Nov 24 17:44:30 localhost kernel: Linux version 5.14.0-639.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025
Nov 24 17:44:30 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 24 17:44:30 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 24 17:44:30 localhost kernel: BIOS-provided physical RAM map:
Nov 24 17:44:30 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 24 17:44:30 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 24 17:44:30 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 24 17:44:30 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 24 17:44:30 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 24 17:44:30 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 24 17:44:30 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 24 17:44:30 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 24 17:44:30 localhost kernel: NX (Execute Disable) protection: active
Nov 24 17:44:30 localhost kernel: APIC: Static calls initialized
Nov 24 17:44:30 localhost kernel: SMBIOS 2.8 present.
Nov 24 17:44:30 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 24 17:44:30 localhost kernel: Hypervisor detected: KVM
Nov 24 17:44:30 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 24 17:44:30 localhost kernel: kvm-clock: using sched offset of 10733928359 cycles
Nov 24 17:44:30 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 24 17:44:30 localhost kernel: tsc: Detected 2800.000 MHz processor
Nov 24 17:44:30 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 24 17:44:30 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 24 17:44:30 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 24 17:44:30 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 24 17:44:30 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 24 17:44:30 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 24 17:44:30 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 24 17:44:30 localhost kernel: Using GB pages for direct mapping
Nov 24 17:44:30 localhost kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 24 17:44:30 localhost kernel: ACPI: Early table checksum verification disabled
Nov 24 17:44:30 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 24 17:44:30 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 17:44:30 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 17:44:30 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 17:44:30 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 24 17:44:30 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 17:44:30 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 17:44:30 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 24 17:44:30 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 24 17:44:30 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 24 17:44:30 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 24 17:44:30 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 24 17:44:30 localhost kernel: No NUMA configuration found
Nov 24 17:44:30 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 24 17:44:30 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Nov 24 17:44:30 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 24 17:44:30 localhost kernel: Zone ranges:
Nov 24 17:44:30 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 24 17:44:30 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 24 17:44:30 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 24 17:44:30 localhost kernel:   Device   empty
Nov 24 17:44:30 localhost kernel: Movable zone start for each node
Nov 24 17:44:30 localhost kernel: Early memory node ranges
Nov 24 17:44:30 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 24 17:44:30 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 24 17:44:30 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 24 17:44:30 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 24 17:44:30 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 24 17:44:30 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 24 17:44:30 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 24 17:44:30 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Nov 24 17:44:30 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 24 17:44:30 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 24 17:44:30 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 24 17:44:30 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 24 17:44:30 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 24 17:44:30 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 24 17:44:30 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 24 17:44:30 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 24 17:44:30 localhost kernel: TSC deadline timer available
Nov 24 17:44:30 localhost kernel: CPU topo: Max. logical packages:   8
Nov 24 17:44:30 localhost kernel: CPU topo: Max. logical dies:       8
Nov 24 17:44:30 localhost kernel: CPU topo: Max. dies per package:   1
Nov 24 17:44:30 localhost kernel: CPU topo: Max. threads per core:   1
Nov 24 17:44:30 localhost kernel: CPU topo: Num. cores per package:     1
Nov 24 17:44:30 localhost kernel: CPU topo: Num. threads per package:   1
Nov 24 17:44:30 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 24 17:44:30 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 24 17:44:30 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 24 17:44:30 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 24 17:44:30 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 24 17:44:30 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 24 17:44:30 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 24 17:44:30 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 24 17:44:30 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 24 17:44:30 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 24 17:44:30 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 24 17:44:30 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 24 17:44:30 localhost kernel: Booting paravirtualized kernel on KVM
Nov 24 17:44:30 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 24 17:44:30 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 24 17:44:30 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 24 17:44:30 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Nov 24 17:44:30 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Nov 24 17:44:30 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 24 17:44:30 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 24 17:44:30 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64", will be passed to user space.
Nov 24 17:44:30 localhost kernel: random: crng init done
Nov 24 17:44:30 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 24 17:44:30 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 24 17:44:30 localhost kernel: Fallback order for Node 0: 0 
Nov 24 17:44:30 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 24 17:44:30 localhost kernel: Policy zone: Normal
Nov 24 17:44:30 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 24 17:44:30 localhost kernel: software IO TLB: area num 8.
Nov 24 17:44:30 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 24 17:44:30 localhost kernel: ftrace: allocating 49298 entries in 193 pages
Nov 24 17:44:30 localhost kernel: ftrace: allocated 193 pages with 3 groups
Nov 24 17:44:30 localhost kernel: Dynamic Preempt: voluntary
Nov 24 17:44:30 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 24 17:44:30 localhost kernel: rcu:         RCU event tracing is enabled.
Nov 24 17:44:30 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 24 17:44:30 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Nov 24 17:44:30 localhost kernel:         Rude variant of Tasks RCU enabled.
Nov 24 17:44:30 localhost kernel:         Tracing variant of Tasks RCU enabled.
Nov 24 17:44:30 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 24 17:44:30 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 24 17:44:30 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 24 17:44:30 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 24 17:44:30 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 24 17:44:30 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 24 17:44:30 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 24 17:44:30 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 24 17:44:30 localhost kernel: Console: colour VGA+ 80x25
Nov 24 17:44:30 localhost kernel: printk: console [ttyS0] enabled
Nov 24 17:44:30 localhost kernel: ACPI: Core revision 20230331
Nov 24 17:44:30 localhost kernel: APIC: Switch to symmetric I/O mode setup
Nov 24 17:44:30 localhost kernel: x2apic enabled
Nov 24 17:44:30 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Nov 24 17:44:30 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 24 17:44:30 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Nov 24 17:44:30 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 24 17:44:30 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 24 17:44:30 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 24 17:44:30 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 24 17:44:30 localhost kernel: Spectre V2 : Mitigation: Retpolines
Nov 24 17:44:30 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 24 17:44:30 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 24 17:44:30 localhost kernel: RETBleed: Mitigation: untrained return thunk
Nov 24 17:44:30 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 24 17:44:30 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 24 17:44:30 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 24 17:44:30 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 24 17:44:30 localhost kernel: x86/bugs: return thunk changed
Nov 24 17:44:30 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 24 17:44:30 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 24 17:44:30 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 24 17:44:30 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 24 17:44:30 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 24 17:44:30 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 24 17:44:30 localhost kernel: Freeing SMP alternatives memory: 40K
Nov 24 17:44:30 localhost kernel: pid_max: default: 32768 minimum: 301
Nov 24 17:44:30 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 24 17:44:30 localhost kernel: landlock: Up and running.
Nov 24 17:44:30 localhost kernel: Yama: becoming mindful.
Nov 24 17:44:30 localhost kernel: SELinux:  Initializing.
Nov 24 17:44:30 localhost kernel: LSM support for eBPF active
Nov 24 17:44:30 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 24 17:44:30 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 24 17:44:30 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 24 17:44:30 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 24 17:44:30 localhost kernel: ... version:                0
Nov 24 17:44:30 localhost kernel: ... bit width:              48
Nov 24 17:44:30 localhost kernel: ... generic registers:      6
Nov 24 17:44:30 localhost kernel: ... value mask:             0000ffffffffffff
Nov 24 17:44:30 localhost kernel: ... max period:             00007fffffffffff
Nov 24 17:44:30 localhost kernel: ... fixed-purpose events:   0
Nov 24 17:44:30 localhost kernel: ... event mask:             000000000000003f
Nov 24 17:44:30 localhost kernel: signal: max sigframe size: 1776
Nov 24 17:44:30 localhost kernel: rcu: Hierarchical SRCU implementation.
Nov 24 17:44:30 localhost kernel: rcu:         Max phase no-delay instances is 400.
Nov 24 17:44:30 localhost kernel: smp: Bringing up secondary CPUs ...
Nov 24 17:44:30 localhost kernel: smpboot: x86: Booting SMP configuration:
Nov 24 17:44:30 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 24 17:44:30 localhost kernel: smp: Brought up 1 node, 8 CPUs
Nov 24 17:44:30 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Nov 24 17:44:30 localhost kernel: node 0 deferred pages initialised in 9ms
Nov 24 17:44:30 localhost kernel: Memory: 7765920K/8388068K available (16384K kernel code, 5786K rwdata, 13900K rodata, 4188K init, 7176K bss, 616268K reserved, 0K cma-reserved)
Nov 24 17:44:30 localhost kernel: devtmpfs: initialized
Nov 24 17:44:30 localhost kernel: x86/mm: Memory block size: 128MB
Nov 24 17:44:30 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 24 17:44:30 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 24 17:44:30 localhost kernel: pinctrl core: initialized pinctrl subsystem
Nov 24 17:44:30 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 24 17:44:30 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 24 17:44:30 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 24 17:44:30 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 24 17:44:30 localhost kernel: audit: initializing netlink subsys (disabled)
Nov 24 17:44:30 localhost kernel: audit: type=2000 audit(1764006268.505:1): state=initialized audit_enabled=0 res=1
Nov 24 17:44:30 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 24 17:44:30 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 24 17:44:30 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 24 17:44:30 localhost kernel: cpuidle: using governor menu
Nov 24 17:44:30 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 24 17:44:30 localhost kernel: PCI: Using configuration type 1 for base access
Nov 24 17:44:30 localhost kernel: PCI: Using configuration type 1 for extended access
Nov 24 17:44:30 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 24 17:44:30 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 24 17:44:30 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 24 17:44:30 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 24 17:44:30 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 24 17:44:30 localhost kernel: Demotion targets for Node 0: null
Nov 24 17:44:30 localhost kernel: cryptd: max_cpu_qlen set to 1000
Nov 24 17:44:30 localhost kernel: ACPI: Added _OSI(Module Device)
Nov 24 17:44:30 localhost kernel: ACPI: Added _OSI(Processor Device)
Nov 24 17:44:30 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 24 17:44:30 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 24 17:44:30 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 24 17:44:30 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 24 17:44:30 localhost kernel: ACPI: Interpreter enabled
Nov 24 17:44:30 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 24 17:44:30 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Nov 24 17:44:30 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 24 17:44:30 localhost kernel: PCI: Using E820 reservations for host bridge windows
Nov 24 17:44:30 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 24 17:44:30 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 24 17:44:30 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 24 17:44:30 localhost kernel: acpiphp: Slot [3] registered
Nov 24 17:44:30 localhost kernel: acpiphp: Slot [4] registered
Nov 24 17:44:30 localhost kernel: acpiphp: Slot [5] registered
Nov 24 17:44:30 localhost kernel: acpiphp: Slot [6] registered
Nov 24 17:44:30 localhost kernel: acpiphp: Slot [7] registered
Nov 24 17:44:30 localhost kernel: acpiphp: Slot [8] registered
Nov 24 17:44:30 localhost kernel: acpiphp: Slot [9] registered
Nov 24 17:44:30 localhost kernel: acpiphp: Slot [10] registered
Nov 24 17:44:30 localhost kernel: acpiphp: Slot [11] registered
Nov 24 17:44:30 localhost kernel: acpiphp: Slot [12] registered
Nov 24 17:44:30 localhost kernel: acpiphp: Slot [13] registered
Nov 24 17:44:30 localhost kernel: acpiphp: Slot [14] registered
Nov 24 17:44:30 localhost kernel: acpiphp: Slot [15] registered
Nov 24 17:44:30 localhost kernel: acpiphp: Slot [16] registered
Nov 24 17:44:30 localhost kernel: acpiphp: Slot [17] registered
Nov 24 17:44:30 localhost kernel: acpiphp: Slot [18] registered
Nov 24 17:44:30 localhost kernel: acpiphp: Slot [19] registered
Nov 24 17:44:30 localhost kernel: acpiphp: Slot [20] registered
Nov 24 17:44:30 localhost kernel: acpiphp: Slot [21] registered
Nov 24 17:44:30 localhost kernel: acpiphp: Slot [22] registered
Nov 24 17:44:30 localhost kernel: acpiphp: Slot [23] registered
Nov 24 17:44:30 localhost kernel: acpiphp: Slot [24] registered
Nov 24 17:44:30 localhost kernel: acpiphp: Slot [25] registered
Nov 24 17:44:30 localhost kernel: acpiphp: Slot [26] registered
Nov 24 17:44:30 localhost kernel: acpiphp: Slot [27] registered
Nov 24 17:44:30 localhost kernel: acpiphp: Slot [28] registered
Nov 24 17:44:30 localhost kernel: acpiphp: Slot [29] registered
Nov 24 17:44:30 localhost kernel: acpiphp: Slot [30] registered
Nov 24 17:44:30 localhost kernel: acpiphp: Slot [31] registered
Nov 24 17:44:30 localhost kernel: PCI host bridge to bus 0000:00
Nov 24 17:44:30 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 24 17:44:30 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 24 17:44:30 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 24 17:44:30 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 24 17:44:30 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 24 17:44:30 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 24 17:44:30 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 24 17:44:30 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 24 17:44:30 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 24 17:44:30 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 24 17:44:30 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 24 17:44:30 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 24 17:44:30 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 24 17:44:30 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 24 17:44:30 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 24 17:44:30 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 24 17:44:30 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 24 17:44:30 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 24 17:44:30 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 24 17:44:30 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 24 17:44:30 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 24 17:44:30 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 24 17:44:30 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 24 17:44:30 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 24 17:44:30 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 24 17:44:30 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 24 17:44:30 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 24 17:44:30 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 24 17:44:30 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 24 17:44:30 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 24 17:44:30 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 24 17:44:30 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 24 17:44:30 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 24 17:44:30 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 24 17:44:30 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 24 17:44:30 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 24 17:44:30 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 24 17:44:30 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 24 17:44:30 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 24 17:44:30 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 24 17:44:30 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 24 17:44:30 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 24 17:44:30 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 24 17:44:30 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 24 17:44:30 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 24 17:44:30 localhost kernel: iommu: Default domain type: Translated
Nov 24 17:44:30 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 24 17:44:30 localhost kernel: SCSI subsystem initialized
Nov 24 17:44:30 localhost kernel: ACPI: bus type USB registered
Nov 24 17:44:30 localhost kernel: usbcore: registered new interface driver usbfs
Nov 24 17:44:30 localhost kernel: usbcore: registered new interface driver hub
Nov 24 17:44:30 localhost kernel: usbcore: registered new device driver usb
Nov 24 17:44:30 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 24 17:44:30 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 24 17:44:30 localhost kernel: PTP clock support registered
Nov 24 17:44:30 localhost kernel: EDAC MC: Ver: 3.0.0
Nov 24 17:44:30 localhost kernel: NetLabel: Initializing
Nov 24 17:44:30 localhost kernel: NetLabel:  domain hash size = 128
Nov 24 17:44:30 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 24 17:44:30 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Nov 24 17:44:30 localhost kernel: PCI: Using ACPI for IRQ routing
Nov 24 17:44:30 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 24 17:44:30 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 24 17:44:30 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Nov 24 17:44:30 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 24 17:44:30 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 24 17:44:30 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 24 17:44:30 localhost kernel: vgaarb: loaded
Nov 24 17:44:30 localhost kernel: clocksource: Switched to clocksource kvm-clock
Nov 24 17:44:30 localhost kernel: VFS: Disk quotas dquot_6.6.0
Nov 24 17:44:30 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 24 17:44:30 localhost kernel: pnp: PnP ACPI init
Nov 24 17:44:30 localhost kernel: pnp 00:03: [dma 2]
Nov 24 17:44:30 localhost kernel: pnp: PnP ACPI: found 5 devices
Nov 24 17:44:30 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 24 17:44:30 localhost kernel: NET: Registered PF_INET protocol family
Nov 24 17:44:30 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 24 17:44:30 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 24 17:44:30 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 24 17:44:30 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 24 17:44:30 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 24 17:44:30 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 24 17:44:30 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 24 17:44:30 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 24 17:44:30 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 24 17:44:30 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 24 17:44:30 localhost kernel: NET: Registered PF_XDP protocol family
Nov 24 17:44:30 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 24 17:44:30 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 24 17:44:30 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 24 17:44:30 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 24 17:44:30 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 24 17:44:30 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 24 17:44:30 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 24 17:44:30 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 24 17:44:30 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 84869 usecs
Nov 24 17:44:30 localhost kernel: PCI: CLS 0 bytes, default 64
Nov 24 17:44:30 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 24 17:44:30 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 24 17:44:30 localhost kernel: ACPI: bus type thunderbolt registered
Nov 24 17:44:30 localhost kernel: Trying to unpack rootfs image as initramfs...
Nov 24 17:44:30 localhost kernel: Initialise system trusted keyrings
Nov 24 17:44:30 localhost kernel: Key type blacklist registered
Nov 24 17:44:30 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 24 17:44:30 localhost kernel: zbud: loaded
Nov 24 17:44:30 localhost kernel: integrity: Platform Keyring initialized
Nov 24 17:44:30 localhost kernel: integrity: Machine keyring initialized
Nov 24 17:44:30 localhost kernel: Freeing initrd memory: 85868K
Nov 24 17:44:30 localhost kernel: NET: Registered PF_ALG protocol family
Nov 24 17:44:30 localhost kernel: xor: automatically using best checksumming function   avx       
Nov 24 17:44:30 localhost kernel: Key type asymmetric registered
Nov 24 17:44:30 localhost kernel: Asymmetric key parser 'x509' registered
Nov 24 17:44:30 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 24 17:44:30 localhost kernel: io scheduler mq-deadline registered
Nov 24 17:44:30 localhost kernel: io scheduler kyber registered
Nov 24 17:44:30 localhost kernel: io scheduler bfq registered
Nov 24 17:44:30 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 24 17:44:30 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 24 17:44:30 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 24 17:44:30 localhost kernel: ACPI: button: Power Button [PWRF]
Nov 24 17:44:30 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 24 17:44:30 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 24 17:44:30 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 24 17:44:30 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 24 17:44:30 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 24 17:44:30 localhost kernel: Non-volatile memory driver v1.3
Nov 24 17:44:30 localhost kernel: rdac: device handler registered
Nov 24 17:44:30 localhost kernel: hp_sw: device handler registered
Nov 24 17:44:30 localhost kernel: emc: device handler registered
Nov 24 17:44:30 localhost kernel: alua: device handler registered
Nov 24 17:44:30 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 24 17:44:30 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 24 17:44:30 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 24 17:44:30 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 24 17:44:30 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 24 17:44:30 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 24 17:44:30 localhost kernel: usb usb1: Product: UHCI Host Controller
Nov 24 17:44:30 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-639.el9.x86_64 uhci_hcd
Nov 24 17:44:30 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 24 17:44:30 localhost kernel: hub 1-0:1.0: USB hub found
Nov 24 17:44:30 localhost kernel: hub 1-0:1.0: 2 ports detected
Nov 24 17:44:30 localhost kernel: usbcore: registered new interface driver usbserial_generic
Nov 24 17:44:30 localhost kernel: usbserial: USB Serial support registered for generic
Nov 24 17:44:30 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 24 17:44:30 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 24 17:44:30 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 24 17:44:30 localhost kernel: mousedev: PS/2 mouse device common for all mice
Nov 24 17:44:30 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 24 17:44:30 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 24 17:44:30 localhost kernel: rtc_cmos 00:04: registered as rtc0
Nov 24 17:44:30 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 24 17:44:30 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-11-24T17:44:29 UTC (1764006269)
Nov 24 17:44:30 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 24 17:44:30 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 24 17:44:30 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 24 17:44:30 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 24 17:44:30 localhost kernel: usbcore: registered new interface driver usbhid
Nov 24 17:44:30 localhost kernel: usbhid: USB HID core driver
Nov 24 17:44:30 localhost kernel: drop_monitor: Initializing network drop monitor service
Nov 24 17:44:30 localhost kernel: Initializing XFRM netlink socket
Nov 24 17:44:30 localhost kernel: NET: Registered PF_INET6 protocol family
Nov 24 17:44:30 localhost kernel: Segment Routing with IPv6
Nov 24 17:44:30 localhost kernel: NET: Registered PF_PACKET protocol family
Nov 24 17:44:30 localhost kernel: mpls_gso: MPLS GSO support
Nov 24 17:44:30 localhost kernel: IPI shorthand broadcast: enabled
Nov 24 17:44:30 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Nov 24 17:44:30 localhost kernel: AES CTR mode by8 optimization enabled
Nov 24 17:44:30 localhost kernel: sched_clock: Marking stable (1236010790, 149813289)->(1494250839, -108426760)
Nov 24 17:44:30 localhost kernel: registered taskstats version 1
Nov 24 17:44:30 localhost kernel: Loading compiled-in X.509 certificates
Nov 24 17:44:30 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: f7751431c703da8a75244ce96aad68601cf1c188'
Nov 24 17:44:30 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 24 17:44:30 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 24 17:44:30 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 24 17:44:30 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 24 17:44:30 localhost kernel: Demotion targets for Node 0: null
Nov 24 17:44:30 localhost kernel: page_owner is disabled
Nov 24 17:44:30 localhost kernel: Key type .fscrypt registered
Nov 24 17:44:30 localhost kernel: Key type fscrypt-provisioning registered
Nov 24 17:44:30 localhost kernel: Key type big_key registered
Nov 24 17:44:30 localhost kernel: Key type encrypted registered
Nov 24 17:44:30 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 24 17:44:30 localhost kernel: Loading compiled-in module X.509 certificates
Nov 24 17:44:30 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: f7751431c703da8a75244ce96aad68601cf1c188'
Nov 24 17:44:30 localhost kernel: ima: Allocated hash algorithm: sha256
Nov 24 17:44:30 localhost kernel: ima: No architecture policies found
Nov 24 17:44:30 localhost kernel: evm: Initialising EVM extended attributes:
Nov 24 17:44:30 localhost kernel: evm: security.selinux
Nov 24 17:44:30 localhost kernel: evm: security.SMACK64 (disabled)
Nov 24 17:44:30 localhost kernel: evm: security.SMACK64EXEC (disabled)
Nov 24 17:44:30 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 24 17:44:30 localhost kernel: evm: security.SMACK64MMAP (disabled)
Nov 24 17:44:30 localhost kernel: evm: security.apparmor (disabled)
Nov 24 17:44:30 localhost kernel: evm: security.ima
Nov 24 17:44:30 localhost kernel: evm: security.capability
Nov 24 17:44:30 localhost kernel: evm: HMAC attrs: 0x1
Nov 24 17:44:30 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 24 17:44:30 localhost kernel: Running certificate verification RSA selftest
Nov 24 17:44:30 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 24 17:44:30 localhost kernel: Running certificate verification ECDSA selftest
Nov 24 17:44:30 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 24 17:44:30 localhost kernel: clk: Disabling unused clocks
Nov 24 17:44:30 localhost kernel: Freeing unused decrypted memory: 2028K
Nov 24 17:44:30 localhost kernel: Freeing unused kernel image (initmem) memory: 4188K
Nov 24 17:44:30 localhost kernel: Write protecting the kernel read-only data: 30720k
Nov 24 17:44:30 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 24 17:44:30 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 24 17:44:30 localhost kernel: Run /init as init process
Nov 24 17:44:30 localhost kernel:   with arguments:
Nov 24 17:44:30 localhost kernel:     /init
Nov 24 17:44:30 localhost kernel:   with environment:
Nov 24 17:44:30 localhost kernel:     HOME=/
Nov 24 17:44:30 localhost kernel:     TERM=linux
Nov 24 17:44:30 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64
Nov 24 17:44:30 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 24 17:44:30 localhost systemd[1]: Detected virtualization kvm.
Nov 24 17:44:30 localhost systemd[1]: Detected architecture x86-64.
Nov 24 17:44:30 localhost systemd[1]: Running in initrd.
Nov 24 17:44:30 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 24 17:44:30 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 24 17:44:30 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Nov 24 17:44:30 localhost kernel: usb 1-1: Manufacturer: QEMU
Nov 24 17:44:30 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 24 17:44:30 localhost systemd[1]: No hostname configured, using default hostname.
Nov 24 17:44:30 localhost systemd[1]: Hostname set to <localhost>.
Nov 24 17:44:30 localhost systemd[1]: Initializing machine ID from VM UUID.
Nov 24 17:44:30 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 24 17:44:30 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 24 17:44:30 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Nov 24 17:44:30 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 24 17:44:30 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 24 17:44:30 localhost systemd[1]: Reached target Initrd /usr File System.
Nov 24 17:44:30 localhost systemd[1]: Reached target Local File Systems.
Nov 24 17:44:30 localhost systemd[1]: Reached target Path Units.
Nov 24 17:44:30 localhost systemd[1]: Reached target Slice Units.
Nov 24 17:44:30 localhost systemd[1]: Reached target Swaps.
Nov 24 17:44:30 localhost systemd[1]: Reached target Timer Units.
Nov 24 17:44:30 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 24 17:44:30 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Nov 24 17:44:30 localhost systemd[1]: Listening on Journal Socket.
Nov 24 17:44:30 localhost systemd[1]: Listening on udev Control Socket.
Nov 24 17:44:30 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 24 17:44:30 localhost systemd[1]: Reached target Socket Units.
Nov 24 17:44:30 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 24 17:44:30 localhost systemd[1]: Starting Journal Service...
Nov 24 17:44:30 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 24 17:44:30 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 24 17:44:30 localhost systemd[1]: Starting Create System Users...
Nov 24 17:44:30 localhost systemd[1]: Starting Setup Virtual Console...
Nov 24 17:44:30 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 24 17:44:30 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 24 17:44:30 localhost systemd[1]: Finished Create System Users.
Nov 24 17:44:30 localhost systemd-journald[309]: Journal started
Nov 24 17:44:30 localhost systemd-journald[309]: Runtime Journal (/run/log/journal/ce8f254e4b984140abc78040b35476ad) is 8.0M, max 153.6M, 145.6M free.
Nov 24 17:44:30 localhost systemd-sysusers[314]: Creating group 'users' with GID 100.
Nov 24 17:44:30 localhost systemd-sysusers[314]: Creating group 'dbus' with GID 81.
Nov 24 17:44:30 localhost systemd-sysusers[314]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 24 17:44:30 localhost systemd[1]: Started Journal Service.
Nov 24 17:44:30 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 24 17:44:30 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 24 17:44:30 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 24 17:44:30 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 24 17:44:30 localhost systemd[1]: Finished Setup Virtual Console.
Nov 24 17:44:30 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 24 17:44:30 localhost systemd[1]: Starting dracut cmdline hook...
Nov 24 17:44:30 localhost dracut-cmdline[327]: dracut-9 dracut-057-102.git20250818.el9
Nov 24 17:44:30 localhost dracut-cmdline[327]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 24 17:44:30 localhost systemd[1]: Finished dracut cmdline hook.
Nov 24 17:44:30 localhost systemd[1]: Starting dracut pre-udev hook...
Nov 24 17:44:30 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 24 17:44:30 localhost kernel: device-mapper: uevent: version 1.0.3
Nov 24 17:44:30 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 24 17:44:30 localhost kernel: RPC: Registered named UNIX socket transport module.
Nov 24 17:44:30 localhost kernel: RPC: Registered udp transport module.
Nov 24 17:44:30 localhost kernel: RPC: Registered tcp transport module.
Nov 24 17:44:30 localhost kernel: RPC: Registered tcp-with-tls transport module.
Nov 24 17:44:30 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 24 17:44:30 localhost rpc.statd[446]: Version 2.5.4 starting
Nov 24 17:44:30 localhost rpc.statd[446]: Initializing NSM state
Nov 24 17:44:30 localhost rpc.idmapd[451]: Setting log level to 0
Nov 24 17:44:30 localhost systemd[1]: Finished dracut pre-udev hook.
Nov 24 17:44:30 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 24 17:44:30 localhost systemd-udevd[464]: Using default interface naming scheme 'rhel-9.0'.
Nov 24 17:44:30 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 24 17:44:30 localhost systemd[1]: Starting dracut pre-trigger hook...
Nov 24 17:44:30 localhost systemd[1]: Finished dracut pre-trigger hook.
Nov 24 17:44:30 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 24 17:44:30 localhost systemd[1]: Created slice Slice /system/modprobe.
Nov 24 17:44:30 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 24 17:44:30 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 24 17:44:30 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 24 17:44:30 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 24 17:44:30 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 24 17:44:30 localhost systemd[1]: Reached target Network.
Nov 24 17:44:30 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 24 17:44:30 localhost systemd[1]: Starting dracut initqueue hook...
Nov 24 17:44:30 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 24 17:44:30 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 24 17:44:31 localhost kernel: libata version 3.00 loaded.
Nov 24 17:44:31 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Nov 24 17:44:31 localhost kernel: scsi host0: ata_piix
Nov 24 17:44:31 localhost kernel: scsi host1: ata_piix
Nov 24 17:44:31 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 24 17:44:31 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 24 17:44:31 localhost kernel:  vda: vda1
Nov 24 17:44:31 localhost systemd[1]: Mounting Kernel Configuration File System...
Nov 24 17:44:31 localhost systemd[1]: Mounted Kernel Configuration File System.
Nov 24 17:44:31 localhost systemd[1]: Reached target System Initialization.
Nov 24 17:44:31 localhost systemd[1]: Reached target Basic System.
Nov 24 17:44:31 localhost kernel: ata1: found unknown device (class 0)
Nov 24 17:44:31 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 24 17:44:31 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 24 17:44:31 localhost systemd-udevd[490]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 17:44:31 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 24 17:44:31 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 24 17:44:31 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 24 17:44:31 localhost systemd[1]: Found device /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 24 17:44:31 localhost systemd[1]: Reached target Initrd Root Device.
Nov 24 17:44:31 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 24 17:44:31 localhost systemd[1]: Finished dracut initqueue hook.
Nov 24 17:44:31 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Nov 24 17:44:31 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Nov 24 17:44:31 localhost systemd[1]: Reached target Remote File Systems.
Nov 24 17:44:31 localhost systemd[1]: Starting dracut pre-mount hook...
Nov 24 17:44:31 localhost systemd[1]: Finished dracut pre-mount hook.
Nov 24 17:44:31 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709...
Nov 24 17:44:31 localhost systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Nov 24 17:44:31 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 24 17:44:31 localhost systemd[1]: Mounting /sysroot...
Nov 24 17:44:32 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 24 17:44:32 localhost kernel: XFS (vda1): Mounting V5 Filesystem 47e3724e-7a1b-439a-9543-b98c9a290709
Nov 24 17:44:32 localhost kernel: XFS (vda1): Ending clean mount
Nov 24 17:44:32 localhost systemd[1]: Mounted /sysroot.
Nov 24 17:44:32 localhost systemd[1]: Reached target Initrd Root File System.
Nov 24 17:44:32 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 24 17:44:32 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 24 17:44:32 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 24 17:44:32 localhost systemd[1]: Reached target Initrd File Systems.
Nov 24 17:44:32 localhost systemd[1]: Reached target Initrd Default Target.
Nov 24 17:44:32 localhost systemd[1]: Starting dracut mount hook...
Nov 24 17:44:32 localhost systemd[1]: Finished dracut mount hook.
Nov 24 17:44:32 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 24 17:44:32 localhost rpc.idmapd[451]: exiting on signal 15
Nov 24 17:44:32 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 24 17:44:32 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 24 17:44:32 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 24 17:44:32 localhost systemd[1]: Stopped target Network.
Nov 24 17:44:32 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 24 17:44:32 localhost systemd[1]: Stopped target Timer Units.
Nov 24 17:44:32 localhost systemd[1]: dbus.socket: Deactivated successfully.
Nov 24 17:44:32 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 24 17:44:32 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 24 17:44:32 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 24 17:44:32 localhost systemd[1]: Stopped target Initrd Default Target.
Nov 24 17:44:32 localhost systemd[1]: Stopped target Basic System.
Nov 24 17:44:32 localhost systemd[1]: Stopped target Initrd Root Device.
Nov 24 17:44:32 localhost systemd[1]: Stopped target Initrd /usr File System.
Nov 24 17:44:32 localhost systemd[1]: Stopped target Path Units.
Nov 24 17:44:32 localhost systemd[1]: Stopped target Remote File Systems.
Nov 24 17:44:32 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 24 17:44:32 localhost systemd[1]: Stopped target Slice Units.
Nov 24 17:44:32 localhost systemd[1]: Stopped target Socket Units.
Nov 24 17:44:32 localhost systemd[1]: Stopped target System Initialization.
Nov 24 17:44:32 localhost systemd[1]: Stopped target Local File Systems.
Nov 24 17:44:32 localhost systemd[1]: Stopped target Swaps.
Nov 24 17:44:32 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 24 17:44:32 localhost systemd[1]: Stopped dracut mount hook.
Nov 24 17:44:32 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 24 17:44:32 localhost systemd[1]: Stopped dracut pre-mount hook.
Nov 24 17:44:32 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Nov 24 17:44:32 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 24 17:44:32 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 24 17:44:32 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 24 17:44:32 localhost systemd[1]: Stopped dracut initqueue hook.
Nov 24 17:44:32 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 24 17:44:32 localhost systemd[1]: Stopped Apply Kernel Variables.
Nov 24 17:44:32 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 24 17:44:32 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Nov 24 17:44:32 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 24 17:44:32 localhost systemd[1]: Stopped Coldplug All udev Devices.
Nov 24 17:44:32 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 24 17:44:32 localhost systemd[1]: Stopped dracut pre-trigger hook.
Nov 24 17:44:32 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 24 17:44:32 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 24 17:44:32 localhost systemd[1]: Stopped Setup Virtual Console.
Nov 24 17:44:32 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 24 17:44:32 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 24 17:44:32 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 24 17:44:32 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 24 17:44:32 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 24 17:44:32 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 24 17:44:32 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 24 17:44:32 localhost systemd[1]: Closed udev Control Socket.
Nov 24 17:44:32 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 24 17:44:32 localhost systemd[1]: Closed udev Kernel Socket.
Nov 24 17:44:32 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 24 17:44:32 localhost systemd[1]: Stopped dracut pre-udev hook.
Nov 24 17:44:32 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 24 17:44:32 localhost systemd[1]: Stopped dracut cmdline hook.
Nov 24 17:44:32 localhost systemd[1]: Starting Cleanup udev Database...
Nov 24 17:44:32 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 24 17:44:32 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 24 17:44:32 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 24 17:44:32 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Nov 24 17:44:32 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 24 17:44:32 localhost systemd[1]: Stopped Create System Users.
Nov 24 17:44:32 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 24 17:44:32 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 24 17:44:32 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 24 17:44:32 localhost systemd[1]: Finished Cleanup udev Database.
Nov 24 17:44:32 localhost systemd[1]: Reached target Switch Root.
Nov 24 17:44:32 localhost systemd[1]: Starting Switch Root...
Nov 24 17:44:32 localhost systemd[1]: Switching root.
Nov 24 17:44:32 localhost systemd-journald[309]: Journal stopped
Nov 24 17:44:35 localhost systemd-journald[309]: Received SIGTERM from PID 1 (systemd).
Nov 24 17:44:35 localhost kernel: audit: type=1404 audit(1764006273.241:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 24 17:44:35 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 17:44:35 localhost kernel: SELinux:  policy capability open_perms=1
Nov 24 17:44:35 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 17:44:35 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 24 17:44:35 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 17:44:35 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 17:44:35 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 17:44:35 localhost kernel: audit: type=1403 audit(1764006273.427:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 24 17:44:35 localhost systemd[1]: Successfully loaded SELinux policy in 192.690ms.
Nov 24 17:44:35 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 35.774ms.
Nov 24 17:44:35 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 24 17:44:35 localhost systemd[1]: Detected virtualization kvm.
Nov 24 17:44:35 localhost systemd[1]: Detected architecture x86-64.
Nov 24 17:44:35 localhost systemd-rc-local-generator[635]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 17:44:35 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 24 17:44:35 localhost systemd[1]: Stopped Switch Root.
Nov 24 17:44:35 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 24 17:44:35 localhost systemd[1]: Created slice Slice /system/getty.
Nov 24 17:44:35 localhost systemd[1]: Created slice Slice /system/serial-getty.
Nov 24 17:44:35 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Nov 24 17:44:35 localhost systemd[1]: Created slice User and Session Slice.
Nov 24 17:44:35 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 24 17:44:35 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Nov 24 17:44:35 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 24 17:44:35 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 24 17:44:35 localhost systemd[1]: Stopped target Switch Root.
Nov 24 17:44:35 localhost systemd[1]: Stopped target Initrd File Systems.
Nov 24 17:44:35 localhost systemd[1]: Stopped target Initrd Root File System.
Nov 24 17:44:35 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Nov 24 17:44:35 localhost systemd[1]: Reached target Path Units.
Nov 24 17:44:35 localhost systemd[1]: Reached target rpc_pipefs.target.
Nov 24 17:44:35 localhost systemd[1]: Reached target Slice Units.
Nov 24 17:44:35 localhost systemd[1]: Reached target Swaps.
Nov 24 17:44:35 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Nov 24 17:44:35 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Nov 24 17:44:35 localhost systemd[1]: Reached target RPC Port Mapper.
Nov 24 17:44:35 localhost systemd[1]: Listening on Process Core Dump Socket.
Nov 24 17:44:35 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Nov 24 17:44:35 localhost systemd[1]: Listening on udev Control Socket.
Nov 24 17:44:35 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 24 17:44:35 localhost systemd[1]: Mounting Huge Pages File System...
Nov 24 17:44:35 localhost systemd[1]: Mounting POSIX Message Queue File System...
Nov 24 17:44:35 localhost systemd[1]: Mounting Kernel Debug File System...
Nov 24 17:44:35 localhost systemd[1]: Mounting Kernel Trace File System...
Nov 24 17:44:35 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 24 17:44:35 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 24 17:44:35 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 24 17:44:35 localhost systemd[1]: Starting Load Kernel Module drm...
Nov 24 17:44:35 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Nov 24 17:44:35 localhost systemd[1]: Starting Load Kernel Module fuse...
Nov 24 17:44:35 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 24 17:44:35 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 24 17:44:35 localhost systemd[1]: Stopped File System Check on Root Device.
Nov 24 17:44:35 localhost systemd[1]: Stopped Journal Service.
Nov 24 17:44:35 localhost kernel: fuse: init (API version 7.37)
Nov 24 17:44:35 localhost systemd[1]: Starting Journal Service...
Nov 24 17:44:35 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 24 17:44:35 localhost systemd[1]: Starting Generate network units from Kernel command line...
Nov 24 17:44:35 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 24 17:44:35 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Nov 24 17:44:35 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 24 17:44:35 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 24 17:44:35 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 24 17:44:35 localhost systemd[1]: Mounted Huge Pages File System.
Nov 24 17:44:35 localhost systemd[1]: Mounted POSIX Message Queue File System.
Nov 24 17:44:35 localhost systemd[1]: Mounted Kernel Debug File System.
Nov 24 17:44:35 localhost systemd[1]: Mounted Kernel Trace File System.
Nov 24 17:44:35 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 24 17:44:35 localhost kernel: ACPI: bus type drm_connector registered
Nov 24 17:44:35 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 24 17:44:35 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 24 17:44:35 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 24 17:44:35 localhost systemd[1]: Finished Load Kernel Module drm.
Nov 24 17:44:35 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 24 17:44:35 localhost systemd-journald[677]: Journal started
Nov 24 17:44:35 localhost systemd-journald[677]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 24 17:44:35 localhost systemd[1]: Queued start job for default target Multi-User System.
Nov 24 17:44:35 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 24 17:44:35 localhost systemd[1]: Started Journal Service.
Nov 24 17:44:35 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 24 17:44:35 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 24 17:44:35 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 24 17:44:35 localhost systemd[1]: Finished Load Kernel Module fuse.
Nov 24 17:44:35 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 24 17:44:35 localhost systemd[1]: Finished Generate network units from Kernel command line.
Nov 24 17:44:35 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 24 17:44:35 localhost systemd[1]: Mounting FUSE Control File System...
Nov 24 17:44:35 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 24 17:44:35 localhost systemd[1]: Starting Rebuild Hardware Database...
Nov 24 17:44:35 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 24 17:44:35 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 24 17:44:35 localhost systemd[1]: Starting Load/Save OS Random Seed...
Nov 24 17:44:35 localhost systemd[1]: Starting Create System Users...
Nov 24 17:44:35 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 24 17:44:35 localhost systemd[1]: Mounted FUSE Control File System.
Nov 24 17:44:35 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 24 17:44:35 localhost systemd-journald[677]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 24 17:44:35 localhost systemd-journald[677]: Received client request to flush runtime journal.
Nov 24 17:44:35 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 24 17:44:36 localhost systemd[1]: Finished Create System Users.
Nov 24 17:44:36 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 24 17:44:36 localhost systemd[1]: Finished Load/Save OS Random Seed.
Nov 24 17:44:36 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 24 17:44:36 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 24 17:44:36 localhost systemd[1]: Reached target Preparation for Local File Systems.
Nov 24 17:44:36 localhost systemd[1]: Reached target Local File Systems.
Nov 24 17:44:36 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 24 17:44:36 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 24 17:44:36 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 24 17:44:36 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 24 17:44:36 localhost systemd[1]: Starting Automatic Boot Loader Update...
Nov 24 17:44:36 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 24 17:44:36 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 24 17:44:36 localhost bootctl[695]: Couldn't find EFI system partition, skipping.
Nov 24 17:44:36 localhost systemd[1]: Finished Automatic Boot Loader Update.
Nov 24 17:44:36 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 24 17:44:36 localhost systemd[1]: Starting Security Auditing Service...
Nov 24 17:44:36 localhost systemd[1]: Starting RPC Bind...
Nov 24 17:44:36 localhost systemd[1]: Starting Rebuild Journal Catalog...
Nov 24 17:44:36 localhost systemd[1]: Finished Rebuild Journal Catalog.
Nov 24 17:44:36 localhost auditd[701]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 24 17:44:36 localhost auditd[701]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 24 17:44:36 localhost systemd[1]: Started RPC Bind.
Nov 24 17:44:36 localhost augenrules[706]: /sbin/augenrules: No change
Nov 24 17:44:36 localhost augenrules[721]: No rules
Nov 24 17:44:36 localhost augenrules[721]: enabled 1
Nov 24 17:44:36 localhost augenrules[721]: failure 1
Nov 24 17:44:36 localhost augenrules[721]: pid 701
Nov 24 17:44:36 localhost augenrules[721]: rate_limit 0
Nov 24 17:44:36 localhost augenrules[721]: backlog_limit 8192
Nov 24 17:44:36 localhost augenrules[721]: lost 0
Nov 24 17:44:36 localhost augenrules[721]: backlog 3
Nov 24 17:44:36 localhost augenrules[721]: backlog_wait_time 60000
Nov 24 17:44:36 localhost augenrules[721]: backlog_wait_time_actual 0
Nov 24 17:44:36 localhost augenrules[721]: enabled 1
Nov 24 17:44:36 localhost augenrules[721]: failure 1
Nov 24 17:44:36 localhost augenrules[721]: pid 701
Nov 24 17:44:36 localhost augenrules[721]: rate_limit 0
Nov 24 17:44:36 localhost augenrules[721]: backlog_limit 8192
Nov 24 17:44:36 localhost augenrules[721]: lost 0
Nov 24 17:44:36 localhost augenrules[721]: backlog 0
Nov 24 17:44:36 localhost augenrules[721]: backlog_wait_time 60000
Nov 24 17:44:36 localhost augenrules[721]: backlog_wait_time_actual 0
Nov 24 17:44:36 localhost augenrules[721]: enabled 1
Nov 24 17:44:36 localhost augenrules[721]: failure 1
Nov 24 17:44:36 localhost augenrules[721]: pid 701
Nov 24 17:44:36 localhost augenrules[721]: rate_limit 0
Nov 24 17:44:36 localhost augenrules[721]: backlog_limit 8192
Nov 24 17:44:36 localhost augenrules[721]: lost 0
Nov 24 17:44:36 localhost augenrules[721]: backlog 0
Nov 24 17:44:36 localhost augenrules[721]: backlog_wait_time 60000
Nov 24 17:44:36 localhost augenrules[721]: backlog_wait_time_actual 0
Nov 24 17:44:36 localhost systemd[1]: Started Security Auditing Service.
Nov 24 17:44:36 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 24 17:44:36 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 24 17:44:37 localhost systemd[1]: Finished Rebuild Hardware Database.
Nov 24 17:44:37 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 24 17:44:37 localhost systemd-udevd[730]: Using default interface naming scheme 'rhel-9.0'.
Nov 24 17:44:37 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 24 17:44:37 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 24 17:44:37 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 24 17:44:37 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 24 17:44:37 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 24 17:44:37 localhost systemd-udevd[740]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 17:44:37 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 24 17:44:37 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 24 17:44:37 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 24 17:44:37 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 24 17:44:37 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 24 17:44:37 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 24 17:44:37 localhost kernel: Console: switching to colour dummy device 80x25
Nov 24 17:44:37 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 24 17:44:37 localhost kernel: [drm] features: -context_init
Nov 24 17:44:37 localhost kernel: [drm] number of scanouts: 1
Nov 24 17:44:37 localhost kernel: [drm] number of cap sets: 0
Nov 24 17:44:37 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 24 17:44:37 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 24 17:44:37 localhost kernel: Console: switching to colour frame buffer device 128x48
Nov 24 17:44:37 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 24 17:44:37 localhost kernel: kvm_amd: TSC scaling supported
Nov 24 17:44:37 localhost kernel: kvm_amd: Nested Virtualization enabled
Nov 24 17:44:37 localhost kernel: kvm_amd: Nested Paging enabled
Nov 24 17:44:37 localhost kernel: kvm_amd: LBR virtualization supported
Nov 24 17:44:38 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 24 17:44:38 localhost systemd[1]: Starting Update is Completed...
Nov 24 17:44:38 localhost systemd[1]: Finished Update is Completed.
Nov 24 17:44:38 localhost systemd[1]: Reached target System Initialization.
Nov 24 17:44:38 localhost systemd[1]: Started dnf makecache --timer.
Nov 24 17:44:38 localhost systemd[1]: Started Daily rotation of log files.
Nov 24 17:44:38 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 24 17:44:38 localhost systemd[1]: Reached target Timer Units.
Nov 24 17:44:38 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 24 17:44:38 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 24 17:44:38 localhost systemd[1]: Reached target Socket Units.
Nov 24 17:44:38 localhost systemd[1]: Starting D-Bus System Message Bus...
Nov 24 17:44:38 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 24 17:44:38 localhost systemd[1]: Started D-Bus System Message Bus.
Nov 24 17:44:38 localhost systemd[1]: Reached target Basic System.
Nov 24 17:44:38 localhost dbus-broker-lau[812]: Ready
Nov 24 17:44:38 localhost systemd[1]: Starting NTP client/server...
Nov 24 17:44:38 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 24 17:44:38 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 24 17:44:38 localhost systemd[1]: Starting IPv4 firewall with iptables...
Nov 24 17:44:38 localhost systemd[1]: Started irqbalance daemon.
Nov 24 17:44:38 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 24 17:44:38 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 17:44:38 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 17:44:38 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 17:44:38 localhost systemd[1]: Reached target sshd-keygen.target.
Nov 24 17:44:38 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 24 17:44:38 localhost systemd[1]: Reached target User and Group Name Lookups.
Nov 24 17:44:38 localhost systemd[1]: Starting User Login Management...
Nov 24 17:44:38 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 24 17:44:39 localhost systemd-logind[822]: New seat seat0.
Nov 24 17:44:39 localhost systemd-logind[822]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 24 17:44:39 localhost systemd-logind[822]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 24 17:44:39 localhost systemd[1]: Started User Login Management.
Nov 24 17:44:39 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 24 17:44:39 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 24 17:44:39 localhost chronyd[831]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 24 17:44:39 localhost chronyd[831]: Loaded 0 symmetric keys
Nov 24 17:44:39 localhost chronyd[831]: Using right/UTC timezone to obtain leap second data
Nov 24 17:44:39 localhost chronyd[831]: Loaded seccomp filter (level 2)
Nov 24 17:44:39 localhost systemd[1]: Started NTP client/server.
Nov 24 17:44:39 localhost iptables.init[817]: iptables: Applying firewall rules: [  OK  ]
Nov 24 17:44:39 localhost systemd[1]: Finished IPv4 firewall with iptables.
Nov 24 17:44:41 localhost cloud-init[840]: Cloud-init v. 24.4-7.el9 running 'init-local' at Mon, 24 Nov 2025 17:44:41 +0000. Up 13.16 seconds.
Nov 24 17:44:42 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Nov 24 17:44:42 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Nov 24 17:44:42 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpasocghyj.mount: Deactivated successfully.
Nov 24 17:44:42 localhost systemd[1]: Starting Hostname Service...
Nov 24 17:44:42 localhost systemd[1]: Started Hostname Service.
Nov 24 17:44:42 np0005533938.novalocal systemd-hostnamed[856]: Hostname set to <np0005533938.novalocal> (static)
Nov 24 17:44:42 np0005533938.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 24 17:44:42 np0005533938.novalocal systemd[1]: Reached target Preparation for Network.
Nov 24 17:44:42 np0005533938.novalocal systemd[1]: Starting Network Manager...
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.7381] NetworkManager (version 1.54.1-1.el9) is starting... (boot:c726fd3c-29d8-43c4-9498-0fb31e19789a)
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.7387] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.8583] manager[0x55cce271d080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.8653] hostname: hostname: using hostnamed
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.8654] hostname: static hostname changed from (none) to "np0005533938.novalocal"
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.8662] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.8831] manager[0x55cce271d080]: rfkill: Wi-Fi hardware radio set enabled
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.8832] manager[0x55cce271d080]: rfkill: WWAN hardware radio set enabled
Nov 24 17:44:42 np0005533938.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9087] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9089] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9090] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9090] manager: Networking is enabled by state file
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9093] settings: Loaded settings plugin: keyfile (internal)
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9266] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9355] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9387] dhcp: init: Using DHCP client 'internal'
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9390] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9404] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 17:44:42 np0005533938.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9530] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9538] device (lo): Activation: starting connection 'lo' (5922deac-6043-4983-8df6-40dbc8abd7af)
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9547] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9550] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 17:44:42 np0005533938.novalocal systemd[1]: Started Network Manager.
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9585] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9589] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9591] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9592] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9594] device (eth0): carrier: link connected
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9595] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9601] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 24 17:44:42 np0005533938.novalocal systemd[1]: Reached target Network.
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9632] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9636] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9637] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9639] manager: NetworkManager state is now CONNECTING
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9640] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 17:44:42 np0005533938.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9647] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9650] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 17:44:42 np0005533938.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9707] dhcp4 (eth0): state changed new lease, address=38.102.83.27
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9716] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 24 17:44:42 np0005533938.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9736] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9925] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9927] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9934] device (lo): Activation: successful, device activated.
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9942] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9943] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9945] manager: NetworkManager state is now CONNECTED_SITE
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9948] device (eth0): Activation: successful, device activated.
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9953] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 24 17:44:42 np0005533938.novalocal NetworkManager[860]: <info>  [1764006282.9956] manager: startup complete
Nov 24 17:44:43 np0005533938.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 24 17:44:43 np0005533938.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Nov 24 17:44:43 np0005533938.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Nov 24 17:44:43 np0005533938.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 24 17:44:43 np0005533938.novalocal systemd[1]: Reached target NFS client services.
Nov 24 17:44:43 np0005533938.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Nov 24 17:44:43 np0005533938.novalocal systemd[1]: Reached target Remote File Systems.
Nov 24 17:44:43 np0005533938.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 24 17:44:43 np0005533938.novalocal cloud-init[925]: Cloud-init v. 24.4-7.el9 running 'init' at Mon, 24 Nov 2025 17:44:43 +0000. Up 15.02 seconds.
Nov 24 17:44:43 np0005533938.novalocal cloud-init[925]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 24 17:44:43 np0005533938.novalocal cloud-init[925]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 24 17:44:43 np0005533938.novalocal cloud-init[925]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 24 17:44:43 np0005533938.novalocal cloud-init[925]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 24 17:44:43 np0005533938.novalocal cloud-init[925]: ci-info: |  eth0  | True |         38.102.83.27         | 255.255.255.0 | global | fa:16:3e:11:88:c1 |
Nov 24 17:44:43 np0005533938.novalocal cloud-init[925]: ci-info: |  eth0  | True | fe80::f816:3eff:fe11:88c1/64 |       .       |  link  | fa:16:3e:11:88:c1 |
Nov 24 17:44:43 np0005533938.novalocal cloud-init[925]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 24 17:44:43 np0005533938.novalocal cloud-init[925]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 24 17:44:43 np0005533938.novalocal cloud-init[925]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 24 17:44:43 np0005533938.novalocal cloud-init[925]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 24 17:44:43 np0005533938.novalocal cloud-init[925]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 24 17:44:43 np0005533938.novalocal cloud-init[925]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 24 17:44:43 np0005533938.novalocal cloud-init[925]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 24 17:44:43 np0005533938.novalocal cloud-init[925]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 24 17:44:43 np0005533938.novalocal cloud-init[925]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 24 17:44:43 np0005533938.novalocal cloud-init[925]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 24 17:44:43 np0005533938.novalocal cloud-init[925]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 24 17:44:43 np0005533938.novalocal cloud-init[925]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 24 17:44:43 np0005533938.novalocal cloud-init[925]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 24 17:44:43 np0005533938.novalocal cloud-init[925]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 24 17:44:43 np0005533938.novalocal cloud-init[925]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 24 17:44:43 np0005533938.novalocal cloud-init[925]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 24 17:44:43 np0005533938.novalocal cloud-init[925]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 24 17:44:43 np0005533938.novalocal cloud-init[925]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 24 17:44:45 np0005533938.novalocal useradd[991]: new group: name=cloud-user, GID=1001
Nov 24 17:44:45 np0005533938.novalocal useradd[991]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Nov 24 17:44:45 np0005533938.novalocal useradd[991]: add 'cloud-user' to group 'adm'
Nov 24 17:44:45 np0005533938.novalocal useradd[991]: add 'cloud-user' to group 'systemd-journal'
Nov 24 17:44:45 np0005533938.novalocal useradd[991]: add 'cloud-user' to shadow group 'adm'
Nov 24 17:44:45 np0005533938.novalocal useradd[991]: add 'cloud-user' to shadow group 'systemd-journal'
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: Generating public/private rsa key pair.
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: The key fingerprint is:
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: SHA256:hcDAAuEIqGKAH+TbqBy+2fA7fJpWdtXnlt54/h+BmsY root@np0005533938.novalocal
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: The key's randomart image is:
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: +---[RSA 3072]----+
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: |*+...o.          |
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: |B.o . .. .       |
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: |+o.o    . o      |
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: |o..+     o . ..  |
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: |o.o .   S   o... |
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: |o..  o .  . o+  .|
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: |.+. o .    Eo o. |
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: |  *+..    .  o o.|
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: | o.*=         o.=|
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: +----[SHA256]-----+
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: Generating public/private ecdsa key pair.
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: The key fingerprint is:
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: SHA256:CDacZeZsUsI2oBwTd1KvTiEBFaeFP4vCkB2znje57Jg root@np0005533938.novalocal
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: The key's randomart image is:
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: +---[ECDSA 256]---+
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: | ==O===          |
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: |..*o@X.          |
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: |.+ BOo+.         |
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: |o o..Bo.         |
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: |o. ..++ S        |
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: | oo.*.           |
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: |  .o +           |
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: |   oo            |
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: |  E..            |
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: +----[SHA256]-----+
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: Generating public/private ed25519 key pair.
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: The key fingerprint is:
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: SHA256:NDcN/s6nHgDI7xNQdcH0y1ZOLCOYBiEgE81oGTY+ch8 root@np0005533938.novalocal
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: The key's randomart image is:
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: +--[ED25519 256]--+
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: |  BB... +ooo+.   |
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: | o++o. + o *.. . |
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: |..+ E + + B o + +|
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: | o o . + = o o B |
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: |    .   S . . + .|
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: |       . . + .   |
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: |        o   + .  |
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: |         .   +   |
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: |           .o    |
Nov 24 17:44:45 np0005533938.novalocal cloud-init[925]: +----[SHA256]-----+
Nov 24 17:44:45 np0005533938.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Nov 24 17:44:45 np0005533938.novalocal systemd[1]: Reached target Cloud-config availability.
Nov 24 17:44:45 np0005533938.novalocal systemd[1]: Reached target Network is Online.
Nov 24 17:44:45 np0005533938.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Nov 24 17:44:45 np0005533938.novalocal systemd[1]: Starting Crash recovery kernel arming...
Nov 24 17:44:45 np0005533938.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Nov 24 17:44:45 np0005533938.novalocal systemd[1]: Starting System Logging Service...
Nov 24 17:44:45 np0005533938.novalocal systemd[1]: Starting OpenSSH server daemon...
Nov 24 17:44:45 np0005533938.novalocal sm-notify[1007]: Version 2.5.4 starting
Nov 24 17:44:45 np0005533938.novalocal systemd[1]: Starting Permit User Sessions...
Nov 24 17:44:45 np0005533938.novalocal systemd[1]: Started Notify NFS peers of a restart.
Nov 24 17:44:45 np0005533938.novalocal sshd[1009]: Server listening on 0.0.0.0 port 22.
Nov 24 17:44:45 np0005533938.novalocal sshd[1009]: Server listening on :: port 22.
Nov 24 17:44:45 np0005533938.novalocal systemd[1]: Started OpenSSH server daemon.
Nov 24 17:44:45 np0005533938.novalocal systemd[1]: Finished Permit User Sessions.
Nov 24 17:44:45 np0005533938.novalocal systemd[1]: Started Command Scheduler.
Nov 24 17:44:45 np0005533938.novalocal systemd[1]: Started Getty on tty1.
Nov 24 17:44:45 np0005533938.novalocal systemd[1]: Started Serial Getty on ttyS0.
Nov 24 17:44:45 np0005533938.novalocal systemd[1]: Reached target Login Prompts.
Nov 24 17:44:45 np0005533938.novalocal crond[1012]: (CRON) STARTUP (1.5.7)
Nov 24 17:44:45 np0005533938.novalocal crond[1012]: (CRON) INFO (Syslog will be used instead of sendmail.)
Nov 24 17:44:45 np0005533938.novalocal crond[1012]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 59% if used.)
Nov 24 17:44:45 np0005533938.novalocal crond[1012]: (CRON) INFO (running with inotify support)
Nov 24 17:44:45 np0005533938.novalocal rsyslogd[1008]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1008" x-info="https://www.rsyslog.com"] start
Nov 24 17:44:45 np0005533938.novalocal systemd[1]: Started System Logging Service.
Nov 24 17:44:45 np0005533938.novalocal rsyslogd[1008]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 24 17:44:45 np0005533938.novalocal systemd[1]: Reached target Multi-User System.
Nov 24 17:44:45 np0005533938.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 24 17:44:45 np0005533938.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 24 17:44:45 np0005533938.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 24 17:44:45 np0005533938.novalocal rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 17:44:45 np0005533938.novalocal kdumpctl[1021]: kdump: No kdump initial ramdisk found.
Nov 24 17:44:45 np0005533938.novalocal kdumpctl[1021]: kdump: Rebuilding /boot/initramfs-5.14.0-639.el9.x86_64kdump.img
Nov 24 17:44:45 np0005533938.novalocal cloud-init[1135]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Mon, 24 Nov 2025 17:44:45 +0000. Up 17.57 seconds.
Nov 24 17:44:46 np0005533938.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Nov 24 17:44:46 np0005533938.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Nov 24 17:44:46 np0005533938.novalocal sshd-session[1206]: Connection reset by 38.102.83.114 port 39508 [preauth]
Nov 24 17:44:46 np0005533938.novalocal sshd-session[1225]: Unable to negotiate with 38.102.83.114 port 39850: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Nov 24 17:44:46 np0005533938.novalocal sshd-session[1232]: Connection reset by 38.102.83.114 port 39858 [preauth]
Nov 24 17:44:46 np0005533938.novalocal sshd-session[1243]: Unable to negotiate with 38.102.83.114 port 39864: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Nov 24 17:44:46 np0005533938.novalocal sshd-session[1251]: Unable to negotiate with 38.102.83.114 port 39878: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Nov 24 17:44:46 np0005533938.novalocal sshd-session[1260]: Connection reset by 38.102.83.114 port 39892 [preauth]
Nov 24 17:44:46 np0005533938.novalocal sshd-session[1271]: Connection closed by 38.102.83.114 port 39898 [preauth]
Nov 24 17:44:46 np0005533938.novalocal sshd-session[1279]: Unable to negotiate with 38.102.83.114 port 39908: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Nov 24 17:44:46 np0005533938.novalocal sshd-session[1283]: Unable to negotiate with 38.102.83.114 port 39914: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Nov 24 17:44:46 np0005533938.novalocal dracut[1285]: dracut-057-102.git20250818.el9
Nov 24 17:44:46 np0005533938.novalocal cloud-init[1306]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Mon, 24 Nov 2025 17:44:46 +0000. Up 18.02 seconds.
Nov 24 17:44:46 np0005533938.novalocal dracut[1288]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-639.el9.x86_64kdump.img 5.14.0-639.el9.x86_64
Nov 24 17:44:46 np0005533938.novalocal cloud-init[1336]: #############################################################
Nov 24 17:44:46 np0005533938.novalocal cloud-init[1339]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 24 17:44:46 np0005533938.novalocal cloud-init[1347]: 256 SHA256:CDacZeZsUsI2oBwTd1KvTiEBFaeFP4vCkB2znje57Jg root@np0005533938.novalocal (ECDSA)
Nov 24 17:44:46 np0005533938.novalocal cloud-init[1356]: 256 SHA256:NDcN/s6nHgDI7xNQdcH0y1ZOLCOYBiEgE81oGTY+ch8 root@np0005533938.novalocal (ED25519)
Nov 24 17:44:46 np0005533938.novalocal cloud-init[1363]: 3072 SHA256:hcDAAuEIqGKAH+TbqBy+2fA7fJpWdtXnlt54/h+BmsY root@np0005533938.novalocal (RSA)
Nov 24 17:44:46 np0005533938.novalocal cloud-init[1365]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 24 17:44:46 np0005533938.novalocal cloud-init[1367]: #############################################################
Nov 24 17:44:46 np0005533938.novalocal cloud-init[1306]: Cloud-init v. 24.4-7.el9 finished at Mon, 24 Nov 2025 17:44:46 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 18.25 seconds
Nov 24 17:44:46 np0005533938.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Nov 24 17:44:46 np0005533938.novalocal systemd[1]: Reached target Cloud-init target.
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: Module 'resume' will not be installed, because it's in the list to be omitted!
Nov 24 17:44:47 np0005533938.novalocal chronyd[831]: Selected source 162.159.200.123 (2.centos.pool.ntp.org)
Nov 24 17:44:47 np0005533938.novalocal chronyd[831]: System clock TAI offset set to 37 seconds
Nov 24 17:44:47 np0005533938.novalocal dracut[1288]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: memstrack is not available
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: memstrack is not available
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 24 17:44:48 np0005533938.novalocal dracut[1288]: *** Including module: systemd ***
Nov 24 17:44:49 np0005533938.novalocal dracut[1288]: *** Including module: fips ***
Nov 24 17:44:49 np0005533938.novalocal irqbalance[818]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 24 17:44:49 np0005533938.novalocal irqbalance[818]: IRQ 25 affinity is now unmanaged
Nov 24 17:44:49 np0005533938.novalocal irqbalance[818]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 24 17:44:49 np0005533938.novalocal irqbalance[818]: IRQ 31 affinity is now unmanaged
Nov 24 17:44:49 np0005533938.novalocal irqbalance[818]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 24 17:44:49 np0005533938.novalocal irqbalance[818]: IRQ 28 affinity is now unmanaged
Nov 24 17:44:49 np0005533938.novalocal irqbalance[818]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 24 17:44:49 np0005533938.novalocal irqbalance[818]: IRQ 32 affinity is now unmanaged
Nov 24 17:44:49 np0005533938.novalocal irqbalance[818]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 24 17:44:49 np0005533938.novalocal irqbalance[818]: IRQ 30 affinity is now unmanaged
Nov 24 17:44:49 np0005533938.novalocal irqbalance[818]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 24 17:44:49 np0005533938.novalocal irqbalance[818]: IRQ 29 affinity is now unmanaged
Nov 24 17:44:49 np0005533938.novalocal dracut[1288]: *** Including module: systemd-initrd ***
Nov 24 17:44:49 np0005533938.novalocal dracut[1288]: *** Including module: i18n ***
Nov 24 17:44:49 np0005533938.novalocal dracut[1288]: *** Including module: drm ***
Nov 24 17:44:49 np0005533938.novalocal dracut[1288]: *** Including module: prefixdevname ***
Nov 24 17:44:49 np0005533938.novalocal dracut[1288]: *** Including module: kernel-modules ***
Nov 24 17:44:50 np0005533938.novalocal kernel: block vda: the capability attribute has been deprecated.
Nov 24 17:44:50 np0005533938.novalocal dracut[1288]: *** Including module: kernel-modules-extra ***
Nov 24 17:44:50 np0005533938.novalocal dracut[1288]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Nov 24 17:44:50 np0005533938.novalocal dracut[1288]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Nov 24 17:44:50 np0005533938.novalocal dracut[1288]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Nov 24 17:44:50 np0005533938.novalocal dracut[1288]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Nov 24 17:44:50 np0005533938.novalocal dracut[1288]: *** Including module: qemu ***
Nov 24 17:44:50 np0005533938.novalocal dracut[1288]: *** Including module: fstab-sys ***
Nov 24 17:44:50 np0005533938.novalocal dracut[1288]: *** Including module: rootfs-block ***
Nov 24 17:44:50 np0005533938.novalocal dracut[1288]: *** Including module: terminfo ***
Nov 24 17:44:50 np0005533938.novalocal dracut[1288]: *** Including module: udev-rules ***
Nov 24 17:44:51 np0005533938.novalocal dracut[1288]: Skipping udev rule: 91-permissions.rules
Nov 24 17:44:51 np0005533938.novalocal dracut[1288]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 24 17:44:51 np0005533938.novalocal dracut[1288]: *** Including module: virtiofs ***
Nov 24 17:44:51 np0005533938.novalocal dracut[1288]: *** Including module: dracut-systemd ***
Nov 24 17:44:51 np0005533938.novalocal dracut[1288]: *** Including module: usrmount ***
Nov 24 17:44:51 np0005533938.novalocal dracut[1288]: *** Including module: base ***
Nov 24 17:44:51 np0005533938.novalocal dracut[1288]: *** Including module: fs-lib ***
Nov 24 17:44:51 np0005533938.novalocal dracut[1288]: *** Including module: kdumpbase ***
Nov 24 17:44:52 np0005533938.novalocal dracut[1288]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 24 17:44:52 np0005533938.novalocal dracut[1288]:   microcode_ctl module: mangling fw_dir
Nov 24 17:44:52 np0005533938.novalocal dracut[1288]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 24 17:44:52 np0005533938.novalocal dracut[1288]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 24 17:44:52 np0005533938.novalocal dracut[1288]:     microcode_ctl: configuration "intel" is ignored
Nov 24 17:44:52 np0005533938.novalocal dracut[1288]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 24 17:44:52 np0005533938.novalocal dracut[1288]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 24 17:44:52 np0005533938.novalocal dracut[1288]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 24 17:44:52 np0005533938.novalocal dracut[1288]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 24 17:44:52 np0005533938.novalocal dracut[1288]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 24 17:44:52 np0005533938.novalocal dracut[1288]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 24 17:44:52 np0005533938.novalocal dracut[1288]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 24 17:44:52 np0005533938.novalocal dracut[1288]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 24 17:44:52 np0005533938.novalocal dracut[1288]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 24 17:44:52 np0005533938.novalocal dracut[1288]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 24 17:44:52 np0005533938.novalocal dracut[1288]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 24 17:44:52 np0005533938.novalocal dracut[1288]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 24 17:44:52 np0005533938.novalocal dracut[1288]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 24 17:44:52 np0005533938.novalocal dracut[1288]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 24 17:44:52 np0005533938.novalocal dracut[1288]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 24 17:44:52 np0005533938.novalocal dracut[1288]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 24 17:44:52 np0005533938.novalocal dracut[1288]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 24 17:44:52 np0005533938.novalocal dracut[1288]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 24 17:44:52 np0005533938.novalocal dracut[1288]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 24 17:44:52 np0005533938.novalocal dracut[1288]: *** Including module: openssl ***
Nov 24 17:44:52 np0005533938.novalocal dracut[1288]: *** Including module: shutdown ***
Nov 24 17:44:52 np0005533938.novalocal dracut[1288]: *** Including module: squash ***
Nov 24 17:44:53 np0005533938.novalocal dracut[1288]: *** Including modules done ***
Nov 24 17:44:53 np0005533938.novalocal dracut[1288]: *** Installing kernel module dependencies ***
Nov 24 17:44:53 np0005533938.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 17:44:53 np0005533938.novalocal dracut[1288]: *** Installing kernel module dependencies done ***
Nov 24 17:44:54 np0005533938.novalocal dracut[1288]: *** Resolving executable dependencies ***
Nov 24 17:44:56 np0005533938.novalocal dracut[1288]: *** Resolving executable dependencies done ***
Nov 24 17:44:56 np0005533938.novalocal dracut[1288]: *** Generating early-microcode cpio image ***
Nov 24 17:44:56 np0005533938.novalocal dracut[1288]: *** Store current command line parameters ***
Nov 24 17:44:56 np0005533938.novalocal dracut[1288]: Stored kernel commandline:
Nov 24 17:44:56 np0005533938.novalocal dracut[1288]: No dracut internal kernel commandline stored in the initramfs
Nov 24 17:44:56 np0005533938.novalocal dracut[1288]: *** Install squash loader ***
Nov 24 17:44:57 np0005533938.novalocal dracut[1288]: *** Squashing the files inside the initramfs ***
Nov 24 17:44:58 np0005533938.novalocal dracut[1288]: *** Squashing the files inside the initramfs done ***
Nov 24 17:44:58 np0005533938.novalocal dracut[1288]: *** Creating image file '/boot/initramfs-5.14.0-639.el9.x86_64kdump.img' ***
Nov 24 17:44:58 np0005533938.novalocal dracut[1288]: *** Hardlinking files ***
Nov 24 17:44:58 np0005533938.novalocal dracut[1288]: Mode:           real
Nov 24 17:44:58 np0005533938.novalocal dracut[1288]: Files:          50
Nov 24 17:44:58 np0005533938.novalocal dracut[1288]: Linked:         0 files
Nov 24 17:44:58 np0005533938.novalocal dracut[1288]: Compared:       0 xattrs
Nov 24 17:44:58 np0005533938.novalocal dracut[1288]: Compared:       0 files
Nov 24 17:44:58 np0005533938.novalocal dracut[1288]: Saved:          0 B
Nov 24 17:44:58 np0005533938.novalocal dracut[1288]: Duration:       0.000547 seconds
Nov 24 17:44:58 np0005533938.novalocal dracut[1288]: *** Hardlinking files done ***
Nov 24 17:44:59 np0005533938.novalocal dracut[1288]: *** Creating initramfs image file '/boot/initramfs-5.14.0-639.el9.x86_64kdump.img' done ***
Nov 24 17:44:59 np0005533938.novalocal kdumpctl[1021]: kdump: kexec: loaded kdump kernel
Nov 24 17:44:59 np0005533938.novalocal kdumpctl[1021]: kdump: Starting kdump: [OK]
Nov 24 17:44:59 np0005533938.novalocal systemd[1]: Finished Crash recovery kernel arming.
Nov 24 17:44:59 np0005533938.novalocal systemd[1]: Startup finished in 1.587s (kernel) + 3.303s (initrd) + 26.491s (userspace) = 31.382s.
Nov 24 17:45:01 np0005533938.novalocal sshd-session[4298]: Accepted publickey for zuul from 38.102.83.114 port 36548 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Nov 24 17:45:01 np0005533938.novalocal systemd[1]: Created slice User Slice of UID 1000.
Nov 24 17:45:01 np0005533938.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 24 17:45:01 np0005533938.novalocal systemd-logind[822]: New session 1 of user zuul.
Nov 24 17:45:01 np0005533938.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 24 17:45:01 np0005533938.novalocal systemd[1]: Starting User Manager for UID 1000...
Nov 24 17:45:01 np0005533938.novalocal systemd[4302]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 17:45:01 np0005533938.novalocal systemd[4302]: Queued start job for default target Main User Target.
Nov 24 17:45:01 np0005533938.novalocal systemd[4302]: Created slice User Application Slice.
Nov 24 17:45:01 np0005533938.novalocal systemd[4302]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 24 17:45:01 np0005533938.novalocal systemd[4302]: Started Daily Cleanup of User's Temporary Directories.
Nov 24 17:45:01 np0005533938.novalocal systemd[4302]: Reached target Paths.
Nov 24 17:45:01 np0005533938.novalocal systemd[4302]: Reached target Timers.
Nov 24 17:45:01 np0005533938.novalocal systemd[4302]: Starting D-Bus User Message Bus Socket...
Nov 24 17:45:01 np0005533938.novalocal systemd[4302]: Starting Create User's Volatile Files and Directories...
Nov 24 17:45:01 np0005533938.novalocal systemd[4302]: Finished Create User's Volatile Files and Directories.
Nov 24 17:45:01 np0005533938.novalocal systemd[4302]: Listening on D-Bus User Message Bus Socket.
Nov 24 17:45:01 np0005533938.novalocal systemd[4302]: Reached target Sockets.
Nov 24 17:45:01 np0005533938.novalocal systemd[4302]: Reached target Basic System.
Nov 24 17:45:01 np0005533938.novalocal systemd[4302]: Reached target Main User Target.
Nov 24 17:45:01 np0005533938.novalocal systemd[4302]: Startup finished in 141ms.
Nov 24 17:45:01 np0005533938.novalocal systemd[1]: Started User Manager for UID 1000.
Nov 24 17:45:01 np0005533938.novalocal systemd[1]: Started Session 1 of User zuul.
Nov 24 17:45:01 np0005533938.novalocal sshd-session[4298]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 17:45:01 np0005533938.novalocal python3[4385]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 17:45:06 np0005533938.novalocal python3[4413]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 17:45:12 np0005533938.novalocal python3[4471]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 17:45:12 np0005533938.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 17:45:13 np0005533938.novalocal python3[4513]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 24 17:45:15 np0005533938.novalocal python3[4539]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvZPrKgB89mfwS2oik8tzBHyaRlPXyTumbN2XjYTIM9I73V/FHQIy+XgadtbvJmQdYv8gh5HHJ/ClxLAoQ9aQF+mvRKNNs1jSgMJUqsMhPN6puT4ggC46WGm2cz7KmzKpsB0ShzjCEx+MnmeM3wyA9Qhj49wWd31woFFaZ0yOVerGO1NVQlk/OPG/73EZkgrw/yGDomLqV0TCVSy3AhPNg5NtRbQiteODSSbZVl1auSX9PwM/eoz9P0tZMrIFOrXEd1QpVvERhc48M4e8edGTP8GQI4cSCyvKKG53gcEcBzpMbfnQtx4DKICDQxx6CHUC08XioN/xg1GDke+lh7jFrHL37m3oI2k55is36NYx0S3pSY+f6DLn6SiNGX8TaDALHvruYJmRuLFKa/olWFbLiJzfBaW9cpTWGHEkDqqpm7EbWP7Dy8VYQf2ziK+vtM8QLvT7ulXgdFRF9k5sR0YY7NIMeo/48c+v/ONPoP9lLYshkXYLCbff8PwGxgkN39aM= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 17:45:16 np0005533938.novalocal python3[4563]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:45:18 np0005533938.novalocal python3[4662]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 17:45:18 np0005533938.novalocal python3[4733]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764006317.931667-207-236949643964642/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=ff74cb70154b44fbadb80a19812dfd3c_id_rsa follow=False checksum=b939cc582dbc8d0d1a8ac7a9137f32beb4d349b2 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:45:19 np0005533938.novalocal python3[4856]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 17:45:19 np0005533938.novalocal python3[4927]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764006318.8711755-240-33848187972576/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=ff74cb70154b44fbadb80a19812dfd3c_id_rsa.pub follow=False checksum=fbea10b760c4c20f6233311d514933ea718dc471 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:45:20 np0005533938.novalocal python3[4975]: ansible-ping Invoked with data=pong
Nov 24 17:45:21 np0005533938.novalocal python3[4999]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 17:45:23 np0005533938.novalocal python3[5057]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 24 17:45:24 np0005533938.novalocal python3[5089]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:45:24 np0005533938.novalocal python3[5113]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:45:24 np0005533938.novalocal python3[5137]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:45:25 np0005533938.novalocal python3[5161]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:45:25 np0005533938.novalocal python3[5185]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:45:25 np0005533938.novalocal python3[5209]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:45:27 np0005533938.novalocal sudo[5233]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjpxqvvfgeezbgeccxcyivxrsveljbfw ; /usr/bin/python3'
Nov 24 17:45:27 np0005533938.novalocal sudo[5233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:45:27 np0005533938.novalocal python3[5235]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:45:27 np0005533938.novalocal sudo[5233]: pam_unix(sudo:session): session closed for user root
Nov 24 17:45:27 np0005533938.novalocal sudo[5311]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awwkructumiicbgwcucxslukfizgzzta ; /usr/bin/python3'
Nov 24 17:45:27 np0005533938.novalocal sudo[5311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:45:27 np0005533938.novalocal python3[5313]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 17:45:27 np0005533938.novalocal sudo[5311]: pam_unix(sudo:session): session closed for user root
Nov 24 17:45:28 np0005533938.novalocal sudo[5384]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seiyhvendyjzdyoflfhpkydyztqpoxsf ; /usr/bin/python3'
Nov 24 17:45:28 np0005533938.novalocal sudo[5384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:45:28 np0005533938.novalocal python3[5386]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764006327.4731293-21-167780810695481/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:45:28 np0005533938.novalocal sudo[5384]: pam_unix(sudo:session): session closed for user root
Nov 24 17:45:28 np0005533938.novalocal python3[5434]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 17:45:29 np0005533938.novalocal python3[5458]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 17:45:29 np0005533938.novalocal python3[5482]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 17:45:29 np0005533938.novalocal python3[5506]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 17:45:30 np0005533938.novalocal python3[5530]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 17:45:30 np0005533938.novalocal python3[5554]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 17:45:30 np0005533938.novalocal python3[5578]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 17:45:31 np0005533938.novalocal python3[5602]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 17:45:31 np0005533938.novalocal python3[5626]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 17:45:31 np0005533938.novalocal python3[5650]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 17:45:32 np0005533938.novalocal python3[5674]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 17:45:32 np0005533938.novalocal python3[5698]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 17:45:32 np0005533938.novalocal python3[5722]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 17:45:33 np0005533938.novalocal python3[5746]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 17:45:33 np0005533938.novalocal python3[5770]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 17:45:33 np0005533938.novalocal python3[5794]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 17:45:34 np0005533938.novalocal python3[5818]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 17:45:34 np0005533938.novalocal python3[5842]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 17:45:34 np0005533938.novalocal python3[5866]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 17:45:34 np0005533938.novalocal python3[5890]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 17:45:35 np0005533938.novalocal python3[5914]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 17:45:35 np0005533938.novalocal python3[5938]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 17:45:35 np0005533938.novalocal python3[5962]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 17:45:36 np0005533938.novalocal python3[5986]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 17:45:36 np0005533938.novalocal python3[6010]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 17:45:36 np0005533938.novalocal python3[6034]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 17:45:37 np0005533938.novalocal sshd-session[6035]: Invalid user  from 65.49.1.152 port 37227
Nov 24 17:45:38 np0005533938.novalocal sudo[6060]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxsgrjopamhllublgrwigucbvtluzeur ; /usr/bin/python3'
Nov 24 17:45:38 np0005533938.novalocal sudo[6060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:45:39 np0005533938.novalocal python3[6062]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 24 17:45:39 np0005533938.novalocal systemd[1]: Starting Time & Date Service...
Nov 24 17:45:39 np0005533938.novalocal systemd[1]: Started Time & Date Service.
Nov 24 17:45:39 np0005533938.novalocal systemd-timedated[6064]: Changed time zone to 'UTC' (UTC).
Nov 24 17:45:39 np0005533938.novalocal sudo[6060]: pam_unix(sudo:session): session closed for user root
Nov 24 17:45:40 np0005533938.novalocal sudo[6091]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqvdfegmpyaxrnnipmfnlbyyfuareqfr ; /usr/bin/python3'
Nov 24 17:45:40 np0005533938.novalocal sudo[6091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:45:40 np0005533938.novalocal python3[6093]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:45:40 np0005533938.novalocal sudo[6091]: pam_unix(sudo:session): session closed for user root
Nov 24 17:45:40 np0005533938.novalocal python3[6169]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 17:45:41 np0005533938.novalocal sshd-session[6035]: Connection closed by invalid user  65.49.1.152 port 37227 [preauth]
Nov 24 17:45:41 np0005533938.novalocal python3[6240]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764006340.6357918-153-1549162852287/source _original_basename=tmptgtfilkf follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:45:41 np0005533938.novalocal python3[6340]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 17:45:42 np0005533938.novalocal python3[6411]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764006341.627796-183-102437993283363/source _original_basename=tmp417pb5j9 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:45:42 np0005533938.novalocal sudo[6511]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sddxdgvcvgltduyqhbddfuqfjvkdcjji ; /usr/bin/python3'
Nov 24 17:45:42 np0005533938.novalocal sudo[6511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:45:43 np0005533938.novalocal python3[6513]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 17:45:43 np0005533938.novalocal sudo[6511]: pam_unix(sudo:session): session closed for user root
Nov 24 17:45:43 np0005533938.novalocal sudo[6584]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfvfbhrmipqiaghnsfsyaxcjzhqakvbn ; /usr/bin/python3'
Nov 24 17:45:43 np0005533938.novalocal sudo[6584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:45:43 np0005533938.novalocal python3[6586]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764006342.7206419-231-48848782624452/source _original_basename=tmpk1y9k6v4 follow=False checksum=e37e58be433a53918a64d1ef12dfc1e7d01516d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:45:43 np0005533938.novalocal sudo[6584]: pam_unix(sudo:session): session closed for user root
Nov 24 17:45:44 np0005533938.novalocal python3[6634]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 17:45:44 np0005533938.novalocal python3[6660]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 17:45:44 np0005533938.novalocal sudo[6738]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlfrajepvvhjqremcspafnhglbfykatt ; /usr/bin/python3'
Nov 24 17:45:44 np0005533938.novalocal sudo[6738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:45:44 np0005533938.novalocal python3[6740]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 17:45:44 np0005533938.novalocal sudo[6738]: pam_unix(sudo:session): session closed for user root
Nov 24 17:45:44 np0005533938.novalocal sudo[6811]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alnmhatzgqdqmehiluozgdapbkmaxlpv ; /usr/bin/python3'
Nov 24 17:45:44 np0005533938.novalocal sudo[6811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:45:45 np0005533938.novalocal python3[6813]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764006344.46004-273-5557387571913/source _original_basename=tmppopzm06w follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:45:45 np0005533938.novalocal sudo[6811]: pam_unix(sudo:session): session closed for user root
Nov 24 17:45:45 np0005533938.novalocal sudo[6862]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzpvowakctiztcrpufwxzrckllcalubp ; /usr/bin/python3'
Nov 24 17:45:45 np0005533938.novalocal sudo[6862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:45:45 np0005533938.novalocal python3[6864]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-6218-5d16-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 17:45:45 np0005533938.novalocal sudo[6862]: pam_unix(sudo:session): session closed for user root
Nov 24 17:45:46 np0005533938.novalocal python3[6892]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-6218-5d16-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 24 17:45:47 np0005533938.novalocal python3[6920]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:46:04 np0005533938.novalocal sudo[6944]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofhmdkdmcnttrqzpknbguwhozizxwiqb ; /usr/bin/python3'
Nov 24 17:46:04 np0005533938.novalocal sudo[6944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:46:04 np0005533938.novalocal python3[6946]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:46:04 np0005533938.novalocal sudo[6944]: pam_unix(sudo:session): session closed for user root
Nov 24 17:46:09 np0005533938.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 24 17:46:39 np0005533938.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 24 17:46:39 np0005533938.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 24 17:46:39 np0005533938.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 24 17:46:39 np0005533938.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 24 17:46:39 np0005533938.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 24 17:46:39 np0005533938.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 24 17:46:39 np0005533938.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 24 17:46:39 np0005533938.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 24 17:46:39 np0005533938.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 24 17:46:39 np0005533938.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 24 17:46:39 np0005533938.novalocal NetworkManager[860]: <info>  [1764006399.5545] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 24 17:46:39 np0005533938.novalocal systemd-udevd[6950]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 17:46:39 np0005533938.novalocal NetworkManager[860]: <info>  [1764006399.5760] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 17:46:39 np0005533938.novalocal NetworkManager[860]: <info>  [1764006399.5811] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 24 17:46:39 np0005533938.novalocal NetworkManager[860]: <info>  [1764006399.5818] device (eth1): carrier: link connected
Nov 24 17:46:39 np0005533938.novalocal NetworkManager[860]: <info>  [1764006399.5822] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 24 17:46:39 np0005533938.novalocal NetworkManager[860]: <info>  [1764006399.5834] policy: auto-activating connection 'Wired connection 1' (3cf5caf6-dae0-3e12-91e8-cbb71d516e93)
Nov 24 17:46:39 np0005533938.novalocal NetworkManager[860]: <info>  [1764006399.5841] device (eth1): Activation: starting connection 'Wired connection 1' (3cf5caf6-dae0-3e12-91e8-cbb71d516e93)
Nov 24 17:46:39 np0005533938.novalocal NetworkManager[860]: <info>  [1764006399.5843] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 17:46:39 np0005533938.novalocal NetworkManager[860]: <info>  [1764006399.5847] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 17:46:39 np0005533938.novalocal NetworkManager[860]: <info>  [1764006399.5853] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 17:46:39 np0005533938.novalocal NetworkManager[860]: <info>  [1764006399.5861] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 24 17:46:40 np0005533938.novalocal python3[6976]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-80b4-97a3-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 17:46:50 np0005533938.novalocal sudo[7054]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrzlziixqjumugttidtxdlmgnduvukhi ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 24 17:46:50 np0005533938.novalocal sudo[7054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:46:50 np0005533938.novalocal python3[7056]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 17:46:50 np0005533938.novalocal sudo[7054]: pam_unix(sudo:session): session closed for user root
Nov 24 17:46:50 np0005533938.novalocal sudo[7127]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocuartfsiwhdoaujiimiknoisvflxhlt ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 24 17:46:50 np0005533938.novalocal sudo[7127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:46:50 np0005533938.novalocal python3[7129]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764006410.1853683-102-155853294885160/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=d4c7d2f5197cc551e7b426198f8b8e1bde6a08c2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:46:50 np0005533938.novalocal sudo[7127]: pam_unix(sudo:session): session closed for user root
Nov 24 17:46:51 np0005533938.novalocal sudo[7177]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqaleawscygdfmwkzaorbmfezblwbjkl ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 24 17:46:51 np0005533938.novalocal sudo[7177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:46:51 np0005533938.novalocal python3[7179]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 17:46:51 np0005533938.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 24 17:46:51 np0005533938.novalocal systemd[1]: Stopped Network Manager Wait Online.
Nov 24 17:46:51 np0005533938.novalocal systemd[1]: Stopping Network Manager Wait Online...
Nov 24 17:46:51 np0005533938.novalocal systemd[1]: Stopping Network Manager...
Nov 24 17:46:51 np0005533938.novalocal NetworkManager[860]: <info>  [1764006411.9467] caught SIGTERM, shutting down normally.
Nov 24 17:46:51 np0005533938.novalocal NetworkManager[860]: <info>  [1764006411.9482] dhcp4 (eth0): canceled DHCP transaction
Nov 24 17:46:51 np0005533938.novalocal NetworkManager[860]: <info>  [1764006411.9482] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 17:46:51 np0005533938.novalocal NetworkManager[860]: <info>  [1764006411.9482] dhcp4 (eth0): state changed no lease
Nov 24 17:46:51 np0005533938.novalocal NetworkManager[860]: <info>  [1764006411.9487] manager: NetworkManager state is now CONNECTING
Nov 24 17:46:51 np0005533938.novalocal NetworkManager[860]: <info>  [1764006411.9680] dhcp4 (eth1): canceled DHCP transaction
Nov 24 17:46:51 np0005533938.novalocal NetworkManager[860]: <info>  [1764006411.9681] dhcp4 (eth1): state changed no lease
Nov 24 17:46:51 np0005533938.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 17:46:51 np0005533938.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[860]: <info>  [1764006412.4357] exiting (success)
Nov 24 17:46:52 np0005533938.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 24 17:46:52 np0005533938.novalocal systemd[1]: Stopped Network Manager.
Nov 24 17:46:52 np0005533938.novalocal systemd[1]: Starting Network Manager...
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.4940] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:c726fd3c-29d8-43c4-9498-0fb31e19789a)
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.4941] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5007] manager[0x561fcfa46070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 24 17:46:52 np0005533938.novalocal systemd[1]: Starting Hostname Service...
Nov 24 17:46:52 np0005533938.novalocal systemd[1]: Started Hostname Service.
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5727] hostname: hostname: using hostnamed
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5727] hostname: static hostname changed from (none) to "np0005533938.novalocal"
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5734] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5742] manager[0x561fcfa46070]: rfkill: Wi-Fi hardware radio set enabled
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5742] manager[0x561fcfa46070]: rfkill: WWAN hardware radio set enabled
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5779] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5779] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5780] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5780] manager: Networking is enabled by state file
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5783] settings: Loaded settings plugin: keyfile (internal)
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5788] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5818] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5832] dhcp: init: Using DHCP client 'internal'
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5835] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5842] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5849] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5859] device (lo): Activation: starting connection 'lo' (5922deac-6043-4983-8df6-40dbc8abd7af)
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5866] device (eth0): carrier: link connected
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5871] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5876] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5876] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5883] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5890] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5898] device (eth1): carrier: link connected
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5902] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5907] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (3cf5caf6-dae0-3e12-91e8-cbb71d516e93) (indicated)
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5908] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5913] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5921] device (eth1): Activation: starting connection 'Wired connection 1' (3cf5caf6-dae0-3e12-91e8-cbb71d516e93)
Nov 24 17:46:52 np0005533938.novalocal systemd[1]: Started Network Manager.
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5940] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5949] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5956] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5960] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5965] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5972] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5977] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5982] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.5991] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.6003] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.6009] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.6025] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.6030] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.6057] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.6063] dhcp4 (eth0): state changed new lease, address=38.102.83.27
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.6072] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.6083] device (lo): Activation: successful, device activated.
Nov 24 17:46:52 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006412.6104] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 24 17:46:52 np0005533938.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 24 17:46:52 np0005533938.novalocal sudo[7177]: pam_unix(sudo:session): session closed for user root
Nov 24 17:46:53 np0005533938.novalocal python3[7245]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-80b4-97a3-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 17:46:53 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006413.1094] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 24 17:46:53 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006413.1365] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 24 17:46:53 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006413.1369] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 24 17:46:53 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006413.1375] manager: NetworkManager state is now CONNECTED_SITE
Nov 24 17:46:53 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006413.1381] device (eth0): Activation: successful, device activated.
Nov 24 17:46:53 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006413.1391] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 24 17:47:03 np0005533938.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 17:47:22 np0005533938.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 17:47:38 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006458.3121] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 24 17:47:38 np0005533938.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 17:47:38 np0005533938.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 17:47:38 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006458.3413] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 24 17:47:38 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006458.3415] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 24 17:47:38 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006458.3421] device (eth1): Activation: successful, device activated.
Nov 24 17:47:38 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006458.3426] manager: startup complete
Nov 24 17:47:38 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006458.3428] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 24 17:47:38 np0005533938.novalocal NetworkManager[7196]: <warn>  [1764006458.3433] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 24 17:47:38 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006458.3441] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 24 17:47:38 np0005533938.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 24 17:47:38 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006458.3650] dhcp4 (eth1): canceled DHCP transaction
Nov 24 17:47:38 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006458.3651] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 24 17:47:38 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006458.3651] dhcp4 (eth1): state changed no lease
Nov 24 17:47:38 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006458.3666] policy: auto-activating connection 'ci-private-network' (730e1bbf-c4c7-52c0-85e9-2379c2b50bf6)
Nov 24 17:47:38 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006458.3670] device (eth1): Activation: starting connection 'ci-private-network' (730e1bbf-c4c7-52c0-85e9-2379c2b50bf6)
Nov 24 17:47:38 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006458.3671] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 17:47:38 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006458.3673] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 17:47:38 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006458.3679] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 17:47:38 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006458.3686] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 17:47:38 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006458.5140] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 17:47:38 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006458.5142] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 17:47:38 np0005533938.novalocal NetworkManager[7196]: <info>  [1764006458.5149] device (eth1): Activation: successful, device activated.
Nov 24 17:47:41 np0005533938.novalocal systemd[4302]: Starting Mark boot as successful...
Nov 24 17:47:41 np0005533938.novalocal systemd[4302]: Finished Mark boot as successful.
Nov 24 17:47:48 np0005533938.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 17:47:49 np0005533938.novalocal sudo[7368]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqckdrsnejjlswbldcrqtpuddpjdadtc ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 24 17:47:49 np0005533938.novalocal sudo[7368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:47:49 np0005533938.novalocal python3[7370]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 17:47:49 np0005533938.novalocal sudo[7368]: pam_unix(sudo:session): session closed for user root
Nov 24 17:47:50 np0005533938.novalocal sudo[7441]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdsjpfwyfzwhlltluzrobkcwxbxjegpf ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 24 17:47:50 np0005533938.novalocal sudo[7441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:47:50 np0005533938.novalocal python3[7443]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764006469.5237026-267-167324167131161/source _original_basename=tmpa77i20hx follow=False checksum=c553385c2e3b212f0e2dcf8c6aad3b5b766c5901 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:47:50 np0005533938.novalocal sudo[7441]: pam_unix(sudo:session): session closed for user root
Nov 24 17:48:50 np0005533938.novalocal sshd-session[4312]: Received disconnect from 38.102.83.114 port 36548:11: disconnected by user
Nov 24 17:48:50 np0005533938.novalocal sshd-session[4312]: Disconnected from user zuul 38.102.83.114 port 36548
Nov 24 17:48:50 np0005533938.novalocal sshd-session[4298]: pam_unix(sshd:session): session closed for user zuul
Nov 24 17:48:50 np0005533938.novalocal systemd-logind[822]: Session 1 logged out. Waiting for processes to exit.
Nov 24 17:50:41 np0005533938.novalocal systemd[4302]: Created slice User Background Tasks Slice.
Nov 24 17:50:41 np0005533938.novalocal systemd[4302]: Starting Cleanup of User's Temporary Files and Directories...
Nov 24 17:50:41 np0005533938.novalocal systemd[4302]: Finished Cleanup of User's Temporary Files and Directories.
Nov 24 17:53:49 np0005533938.novalocal sshd-session[7471]: Accepted publickey for zuul from 38.102.83.114 port 49050 ssh2: RSA SHA256:hSQOID5Ghp9Ra3Xg4ItfWrKou3AexdidDUUIPh+xbVY
Nov 24 17:53:49 np0005533938.novalocal systemd-logind[822]: New session 3 of user zuul.
Nov 24 17:53:49 np0005533938.novalocal systemd[1]: Started Session 3 of User zuul.
Nov 24 17:53:49 np0005533938.novalocal sshd-session[7471]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 17:53:49 np0005533938.novalocal sudo[7498]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhvmkrjfpgyimquhkbvteanbhbwrwrll ; /usr/bin/python3'
Nov 24 17:53:49 np0005533938.novalocal sudo[7498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:53:49 np0005533938.novalocal python3[7500]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-5362-b8bb-000000001cc8-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 17:53:49 np0005533938.novalocal sudo[7498]: pam_unix(sudo:session): session closed for user root
Nov 24 17:53:50 np0005533938.novalocal sudo[7527]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwaknbsjunpfqeohwkrnozpwgoxinhtv ; /usr/bin/python3'
Nov 24 17:53:50 np0005533938.novalocal sudo[7527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:53:50 np0005533938.novalocal python3[7529]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:53:50 np0005533938.novalocal sudo[7527]: pam_unix(sudo:session): session closed for user root
Nov 24 17:53:50 np0005533938.novalocal sudo[7553]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oglrerdqgefmrrdfchrcudbppwnnygte ; /usr/bin/python3'
Nov 24 17:53:50 np0005533938.novalocal sudo[7553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:53:50 np0005533938.novalocal python3[7555]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:53:50 np0005533938.novalocal sudo[7553]: pam_unix(sudo:session): session closed for user root
Nov 24 17:53:50 np0005533938.novalocal sudo[7579]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chlbvrtkjccjovtnvoihfasadczqloro ; /usr/bin/python3'
Nov 24 17:53:50 np0005533938.novalocal sudo[7579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:53:50 np0005533938.novalocal python3[7581]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:53:50 np0005533938.novalocal sudo[7579]: pam_unix(sudo:session): session closed for user root
Nov 24 17:53:51 np0005533938.novalocal sudo[7605]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmyplzxtcyaqouyzprqpnozneqhzonjv ; /usr/bin/python3'
Nov 24 17:53:51 np0005533938.novalocal sudo[7605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:53:51 np0005533938.novalocal python3[7607]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:53:51 np0005533938.novalocal sudo[7605]: pam_unix(sudo:session): session closed for user root
Nov 24 17:53:51 np0005533938.novalocal sudo[7631]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbsrstgkxnqomgafmmdgmbmmccgvrweq ; /usr/bin/python3'
Nov 24 17:53:51 np0005533938.novalocal sudo[7631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:53:51 np0005533938.novalocal python3[7633]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:53:51 np0005533938.novalocal sudo[7631]: pam_unix(sudo:session): session closed for user root
Nov 24 17:53:51 np0005533938.novalocal sudo[7709]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofdozkhriiamaizlpmzjyijloijumtzj ; /usr/bin/python3'
Nov 24 17:53:51 np0005533938.novalocal sudo[7709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:53:52 np0005533938.novalocal python3[7711]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 17:53:52 np0005533938.novalocal sudo[7709]: pam_unix(sudo:session): session closed for user root
Nov 24 17:53:52 np0005533938.novalocal sudo[7782]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mslgefqipioosckcwjnkklvshzezvitq ; /usr/bin/python3'
Nov 24 17:53:52 np0005533938.novalocal sudo[7782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:53:52 np0005533938.novalocal python3[7784]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764006831.803891-479-261587616630473/source _original_basename=tmpyjw3cgyw follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:53:52 np0005533938.novalocal sudo[7782]: pam_unix(sudo:session): session closed for user root
Nov 24 17:53:53 np0005533938.novalocal sudo[7832]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dazuxcqfsgcfhzafdqvpzppharyqohmx ; /usr/bin/python3'
Nov 24 17:53:53 np0005533938.novalocal sudo[7832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:53:53 np0005533938.novalocal python3[7834]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 17:53:53 np0005533938.novalocal systemd[1]: Reloading.
Nov 24 17:53:53 np0005533938.novalocal systemd-rc-local-generator[7853]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 17:53:53 np0005533938.novalocal sudo[7832]: pam_unix(sudo:session): session closed for user root
Nov 24 17:53:54 np0005533938.novalocal sudo[7889]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laaexxzokhpchnbnrseiliiqascnuhvd ; /usr/bin/python3'
Nov 24 17:53:54 np0005533938.novalocal sudo[7889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:53:55 np0005533938.novalocal python3[7891]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 24 17:53:55 np0005533938.novalocal sudo[7889]: pam_unix(sudo:session): session closed for user root
Nov 24 17:53:55 np0005533938.novalocal sudo[7915]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tciuuqepmbtrxvhlorvkuxnaeevvgptl ; /usr/bin/python3'
Nov 24 17:53:55 np0005533938.novalocal sudo[7915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:53:55 np0005533938.novalocal python3[7917]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 17:53:55 np0005533938.novalocal sudo[7915]: pam_unix(sudo:session): session closed for user root
Nov 24 17:53:55 np0005533938.novalocal sudo[7943]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnhluoxvtlxgbervxrjbnqthqawnascz ; /usr/bin/python3'
Nov 24 17:53:55 np0005533938.novalocal sudo[7943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:53:55 np0005533938.novalocal python3[7945]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 17:53:55 np0005533938.novalocal sudo[7943]: pam_unix(sudo:session): session closed for user root
Nov 24 17:53:55 np0005533938.novalocal sudo[7971]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhdaaqnelxlgxxvikwosdejbizevqgdj ; /usr/bin/python3'
Nov 24 17:53:55 np0005533938.novalocal sudo[7971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:53:56 np0005533938.novalocal python3[7973]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 17:53:56 np0005533938.novalocal sudo[7971]: pam_unix(sudo:session): session closed for user root
Nov 24 17:53:56 np0005533938.novalocal sudo[7999]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abjjgwbfujoforcgtjtaknwprmkzzxbi ; /usr/bin/python3'
Nov 24 17:53:56 np0005533938.novalocal sudo[7999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:53:56 np0005533938.novalocal python3[8001]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 17:53:56 np0005533938.novalocal sudo[7999]: pam_unix(sudo:session): session closed for user root
Nov 24 17:53:56 np0005533938.novalocal python3[8028]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-5362-b8bb-000000001ccf-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 17:53:57 np0005533938.novalocal python3[8058]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 17:53:59 np0005533938.novalocal irqbalance[818]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 24 17:53:59 np0005533938.novalocal irqbalance[818]: IRQ 26 affinity is now unmanaged
Nov 24 17:53:59 np0005533938.novalocal sshd-session[7474]: Connection closed by 38.102.83.114 port 49050
Nov 24 17:53:59 np0005533938.novalocal sshd-session[7471]: pam_unix(sshd:session): session closed for user zuul
Nov 24 17:53:59 np0005533938.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Nov 24 17:53:59 np0005533938.novalocal systemd[1]: session-3.scope: Consumed 4.268s CPU time.
Nov 24 17:53:59 np0005533938.novalocal systemd-logind[822]: Session 3 logged out. Waiting for processes to exit.
Nov 24 17:53:59 np0005533938.novalocal systemd-logind[822]: Removed session 3.
Nov 24 17:54:00 np0005533938.novalocal sshd-session[8063]: Accepted publickey for zuul from 38.102.83.114 port 42274 ssh2: RSA SHA256:hSQOID5Ghp9Ra3Xg4ItfWrKou3AexdidDUUIPh+xbVY
Nov 24 17:54:00 np0005533938.novalocal systemd-logind[822]: New session 4 of user zuul.
Nov 24 17:54:00 np0005533938.novalocal systemd[1]: Started Session 4 of User zuul.
Nov 24 17:54:00 np0005533938.novalocal sshd-session[8063]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 17:54:01 np0005533938.novalocal sudo[8090]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uteclbcircdvsfttzvchktovgddapmxx ; /usr/bin/python3'
Nov 24 17:54:01 np0005533938.novalocal sudo[8090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:54:01 np0005533938.novalocal python3[8092]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 24 17:54:26 np0005533938.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 24 17:54:26 np0005533938.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 17:54:26 np0005533938.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 24 17:54:26 np0005533938.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 17:54:26 np0005533938.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 24 17:54:26 np0005533938.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 17:54:26 np0005533938.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 17:54:26 np0005533938.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 17:54:38 np0005533938.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 24 17:54:38 np0005533938.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 17:54:38 np0005533938.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 24 17:54:38 np0005533938.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 17:54:38 np0005533938.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 24 17:54:38 np0005533938.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 17:54:38 np0005533938.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 17:54:38 np0005533938.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 17:54:50 np0005533938.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 24 17:54:50 np0005533938.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 17:54:50 np0005533938.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 24 17:54:50 np0005533938.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 17:54:50 np0005533938.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 24 17:54:50 np0005533938.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 17:54:50 np0005533938.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 17:54:50 np0005533938.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 17:54:54 np0005533938.novalocal setsebool[8159]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 24 17:54:54 np0005533938.novalocal setsebool[8159]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 24 17:55:07 np0005533938.novalocal kernel: SELinux:  Converting 388 SID table entries...
Nov 24 17:55:07 np0005533938.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 17:55:07 np0005533938.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 24 17:55:07 np0005533938.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 17:55:07 np0005533938.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 24 17:55:07 np0005533938.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 17:55:07 np0005533938.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 17:55:07 np0005533938.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 17:55:35 np0005533938.novalocal dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 24 17:55:35 np0005533938.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 17:55:35 np0005533938.novalocal systemd[1]: Starting man-db-cache-update.service...
Nov 24 17:55:35 np0005533938.novalocal systemd[1]: Reloading.
Nov 24 17:55:35 np0005533938.novalocal systemd-rc-local-generator[8911]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 17:55:35 np0005533938.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 17:55:39 np0005533938.novalocal irqbalance[818]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 24 17:55:39 np0005533938.novalocal irqbalance[818]: IRQ 27 affinity is now unmanaged
Nov 24 17:55:42 np0005533938.novalocal sudo[8090]: pam_unix(sudo:session): session closed for user root
Nov 24 17:55:43 np0005533938.novalocal python3[11619]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163ec2-ffbe-600e-35cb-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 17:55:45 np0005533938.novalocal kernel: evm: overlay not supported
Nov 24 17:55:45 np0005533938.novalocal systemd[4302]: Starting D-Bus User Message Bus...
Nov 24 17:55:45 np0005533938.novalocal dbus-broker-launch[12345]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 24 17:55:45 np0005533938.novalocal dbus-broker-launch[12345]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 24 17:55:45 np0005533938.novalocal systemd[4302]: Started D-Bus User Message Bus.
Nov 24 17:55:45 np0005533938.novalocal dbus-broker-lau[12345]: Ready
Nov 24 17:55:45 np0005533938.novalocal systemd[4302]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 24 17:55:45 np0005533938.novalocal systemd[4302]: Created slice Slice /user.
Nov 24 17:55:45 np0005533938.novalocal systemd[4302]: podman-12029.scope: unit configures an IP firewall, but not running as root.
Nov 24 17:55:45 np0005533938.novalocal systemd[4302]: (This warning is only shown for the first unit using IP firewalling.)
Nov 24 17:55:45 np0005533938.novalocal systemd[4302]: Started podman-12029.scope.
Nov 24 17:55:45 np0005533938.novalocal systemd[4302]: Started podman-pause-96706e27.scope.
Nov 24 17:55:46 np0005533938.novalocal sudo[12708]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czdbohgcizgamvgvfwpvchbdzgxdsxsw ; /usr/bin/python3'
Nov 24 17:55:46 np0005533938.novalocal sudo[12708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:55:46 np0005533938.novalocal python3[12714]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.83:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.83:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:55:46 np0005533938.novalocal python3[12714]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 24 17:55:46 np0005533938.novalocal sudo[12708]: pam_unix(sudo:session): session closed for user root
Nov 24 17:55:46 np0005533938.novalocal sshd-session[8066]: Connection closed by 38.102.83.114 port 42274
Nov 24 17:55:46 np0005533938.novalocal sshd-session[8063]: pam_unix(sshd:session): session closed for user zuul
Nov 24 17:55:46 np0005533938.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Nov 24 17:55:46 np0005533938.novalocal systemd[1]: session-4.scope: Consumed 1min 3.087s CPU time.
Nov 24 17:55:46 np0005533938.novalocal systemd-logind[822]: Session 4 logged out. Waiting for processes to exit.
Nov 24 17:55:46 np0005533938.novalocal systemd-logind[822]: Removed session 4.
Nov 24 17:55:58 np0005533938.novalocal sshd-session[16208]: Invalid user daniel from 185.156.73.233 port 39838
Nov 24 17:55:58 np0005533938.novalocal sshd-session[16208]: Connection closed by invalid user daniel 185.156.73.233 port 39838 [preauth]
Nov 24 17:56:04 np0005533938.novalocal sshd-session[18493]: Connection closed by 38.102.83.41 port 44286 [preauth]
Nov 24 17:56:04 np0005533938.novalocal sshd-session[18495]: Connection closed by 38.102.83.41 port 44294 [preauth]
Nov 24 17:56:04 np0005533938.novalocal sshd-session[18497]: Unable to negotiate with 38.102.83.41 port 44310: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 24 17:56:04 np0005533938.novalocal sshd-session[18494]: Unable to negotiate with 38.102.83.41 port 44324: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 24 17:56:04 np0005533938.novalocal sshd-session[18498]: Unable to negotiate with 38.102.83.41 port 44340: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 24 17:56:08 np0005533938.novalocal sshd-session[19493]: Accepted publickey for zuul from 38.102.83.114 port 46368 ssh2: RSA SHA256:hSQOID5Ghp9Ra3Xg4ItfWrKou3AexdidDUUIPh+xbVY
Nov 24 17:56:08 np0005533938.novalocal systemd-logind[822]: New session 5 of user zuul.
Nov 24 17:56:08 np0005533938.novalocal systemd[1]: Started Session 5 of User zuul.
Nov 24 17:56:08 np0005533938.novalocal sshd-session[19493]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 17:56:08 np0005533938.novalocal python3[19582]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPyJsA0slNInOFW3vXIajO+Ycf+ai01xx9++d2jFL87iEIJu8FOEeXKZ3B71uNxaMGyjhpI3Hj56b8aVGnqE46E= zuul@np0005533937.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 17:56:08 np0005533938.novalocal sudo[19695]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amgrpuxuxhkezqbmrjiakarlvtzhrlwb ; /usr/bin/python3'
Nov 24 17:56:08 np0005533938.novalocal sudo[19695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:56:09 np0005533938.novalocal python3[19701]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPyJsA0slNInOFW3vXIajO+Ycf+ai01xx9++d2jFL87iEIJu8FOEeXKZ3B71uNxaMGyjhpI3Hj56b8aVGnqE46E= zuul@np0005533937.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 17:56:09 np0005533938.novalocal sudo[19695]: pam_unix(sudo:session): session closed for user root
Nov 24 17:56:09 np0005533938.novalocal sudo[19966]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfvjlejhmarbkhrttrcaehevmniktift ; /usr/bin/python3'
Nov 24 17:56:09 np0005533938.novalocal sudo[19966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:56:09 np0005533938.novalocal python3[19975]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005533938.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 24 17:56:10 np0005533938.novalocal useradd[20047]: new group: name=cloud-admin, GID=1002
Nov 24 17:56:10 np0005533938.novalocal useradd[20047]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Nov 24 17:56:10 np0005533938.novalocal sudo[19966]: pam_unix(sudo:session): session closed for user root
Nov 24 17:56:10 np0005533938.novalocal sudo[20206]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjkxvwecsmqjqjadrzqxcikvjyvbxvrl ; /usr/bin/python3'
Nov 24 17:56:10 np0005533938.novalocal sudo[20206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:56:10 np0005533938.novalocal python3[20213]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPyJsA0slNInOFW3vXIajO+Ycf+ai01xx9++d2jFL87iEIJu8FOEeXKZ3B71uNxaMGyjhpI3Hj56b8aVGnqE46E= zuul@np0005533937.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 17:56:10 np0005533938.novalocal sudo[20206]: pam_unix(sudo:session): session closed for user root
Nov 24 17:56:10 np0005533938.novalocal sudo[20399]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpkbsjglcuqtepjbfzcxmvklxiqhqnki ; /usr/bin/python3'
Nov 24 17:56:10 np0005533938.novalocal sudo[20399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:56:10 np0005533938.novalocal python3[20404]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 17:56:10 np0005533938.novalocal sudo[20399]: pam_unix(sudo:session): session closed for user root
Nov 24 17:56:11 np0005533938.novalocal sudo[20617]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhcivtsahpphtrawamtpgibbbgecyoau ; /usr/bin/python3'
Nov 24 17:56:11 np0005533938.novalocal sudo[20617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:56:11 np0005533938.novalocal python3[20625]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764006970.6174285-135-182784570807934/source _original_basename=tmpxy4406g3 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:56:11 np0005533938.novalocal sudo[20617]: pam_unix(sudo:session): session closed for user root
Nov 24 17:56:11 np0005533938.novalocal sudo[20855]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnbfoatevwapswgiyilqvtkvbnmswemi ; /usr/bin/python3'
Nov 24 17:56:11 np0005533938.novalocal sudo[20855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:56:12 np0005533938.novalocal python3[20857]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 24 17:56:12 np0005533938.novalocal systemd[1]: Starting Hostname Service...
Nov 24 17:56:12 np0005533938.novalocal systemd[1]: Started Hostname Service.
Nov 24 17:56:12 np0005533938.novalocal systemd-hostnamed[20913]: Changed pretty hostname to 'compute-0'
Nov 24 17:56:12 compute-0 systemd-hostnamed[20913]: Hostname set to <compute-0> (static)
Nov 24 17:56:12 compute-0 NetworkManager[7196]: <info>  [1764006972.3904] hostname: static hostname changed from "np0005533938.novalocal" to "compute-0"
Nov 24 17:56:12 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 17:56:12 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 17:56:12 compute-0 sudo[20855]: pam_unix(sudo:session): session closed for user root
Nov 24 17:56:12 compute-0 sshd-session[19531]: Connection closed by 38.102.83.114 port 46368
Nov 24 17:56:12 compute-0 sshd-session[19493]: pam_unix(sshd:session): session closed for user zuul
Nov 24 17:56:12 compute-0 systemd[1]: session-5.scope: Deactivated successfully.
Nov 24 17:56:12 compute-0 systemd[1]: session-5.scope: Consumed 2.246s CPU time.
Nov 24 17:56:12 compute-0 systemd-logind[822]: Session 5 logged out. Waiting for processes to exit.
Nov 24 17:56:12 compute-0 systemd-logind[822]: Removed session 5.
Nov 24 17:56:22 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 17:56:39 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 17:56:39 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 17:56:39 compute-0 systemd[1]: man-db-cache-update.service: Consumed 55.551s CPU time.
Nov 24 17:56:39 compute-0 systemd[1]: run-r95d575a4792f44a4bf0a59703c9b3d3c.service: Deactivated successfully.
Nov 24 17:56:42 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 17:57:41 compute-0 systemd[1]: Starting dnf makecache...
Nov 24 17:57:42 compute-0 dnf[29918]: Failed determining last makecache time.
Nov 24 17:57:42 compute-0 dnf[29918]: CentOS Stream 9 - BaseOS                         23 kB/s | 7.3 kB     00:00
Nov 24 17:57:42 compute-0 dnf[29918]: CentOS Stream 9 - AppStream                      69 kB/s | 7.4 kB     00:00
Nov 24 17:57:42 compute-0 dnf[29918]: CentOS Stream 9 - CRB                            75 kB/s | 7.2 kB     00:00
Nov 24 17:57:43 compute-0 dnf[29918]: CentOS Stream 9 - Extras packages                73 kB/s | 8.3 kB     00:00
Nov 24 17:57:43 compute-0 dnf[29918]: Metadata cache created.
Nov 24 17:57:43 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 24 17:57:43 compute-0 systemd[1]: Finished dnf makecache.
Nov 24 17:59:41 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 24 17:59:41 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 24 17:59:41 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 24 17:59:41 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 24 17:59:51 compute-0 sshd-session[29927]: Accepted publickey for zuul from 38.102.83.41 port 45506 ssh2: RSA SHA256:hSQOID5Ghp9Ra3Xg4ItfWrKou3AexdidDUUIPh+xbVY
Nov 24 17:59:51 compute-0 systemd-logind[822]: New session 6 of user zuul.
Nov 24 17:59:51 compute-0 systemd[1]: Started Session 6 of User zuul.
Nov 24 17:59:51 compute-0 sshd-session[29927]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 17:59:52 compute-0 python3[30003]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 17:59:53 compute-0 sudo[30117]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuoyjmjmrujzlinvviioqpmcxzayqxqf ; /usr/bin/python3'
Nov 24 17:59:53 compute-0 sudo[30117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:59:53 compute-0 python3[30119]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 17:59:53 compute-0 sudo[30117]: pam_unix(sudo:session): session closed for user root
Nov 24 17:59:54 compute-0 sudo[30190]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpalyyuvxsshfpjgdtkhldayqsskcepl ; /usr/bin/python3'
Nov 24 17:59:54 compute-0 sudo[30190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:59:54 compute-0 python3[30192]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764007193.5245514-33756-72718121001795/source mode=0755 _original_basename=delorean.repo follow=False checksum=1830be8248976a7f714fb01ca8550e92dfc79ad2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:59:54 compute-0 sudo[30190]: pam_unix(sudo:session): session closed for user root
Nov 24 17:59:54 compute-0 sudo[30216]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umotitasnwbzjcaknrfrihcerxljsgko ; /usr/bin/python3'
Nov 24 17:59:54 compute-0 sudo[30216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:59:54 compute-0 python3[30218]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 17:59:54 compute-0 sudo[30216]: pam_unix(sudo:session): session closed for user root
Nov 24 17:59:54 compute-0 sudo[30289]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aonyypyhrfvdwrliftsvuezenlcvkxik ; /usr/bin/python3'
Nov 24 17:59:54 compute-0 sudo[30289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:59:54 compute-0 python3[30291]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764007193.5245514-33756-72718121001795/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:59:54 compute-0 sudo[30289]: pam_unix(sudo:session): session closed for user root
Nov 24 17:59:54 compute-0 sudo[30315]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eedcdbpwrvhebcrzlpwzqfblxfqtwqlf ; /usr/bin/python3'
Nov 24 17:59:54 compute-0 sudo[30315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:59:55 compute-0 python3[30317]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 17:59:55 compute-0 sudo[30315]: pam_unix(sudo:session): session closed for user root
Nov 24 17:59:55 compute-0 sudo[30388]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzuwfjmgkbckpkvktbycarbhgyenkjqn ; /usr/bin/python3'
Nov 24 17:59:55 compute-0 sudo[30388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:59:55 compute-0 python3[30390]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764007193.5245514-33756-72718121001795/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:59:55 compute-0 sudo[30388]: pam_unix(sudo:session): session closed for user root
Nov 24 17:59:55 compute-0 sudo[30414]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvwsgjhgklntojnhnplycycelqtftfwp ; /usr/bin/python3'
Nov 24 17:59:55 compute-0 sudo[30414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:59:55 compute-0 python3[30416]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 17:59:55 compute-0 sudo[30414]: pam_unix(sudo:session): session closed for user root
Nov 24 17:59:55 compute-0 sudo[30487]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgtkzrumcslcfkztvatetfktjvhusvye ; /usr/bin/python3'
Nov 24 17:59:55 compute-0 sudo[30487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:59:56 compute-0 python3[30489]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764007193.5245514-33756-72718121001795/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:59:56 compute-0 sudo[30487]: pam_unix(sudo:session): session closed for user root
Nov 24 17:59:56 compute-0 sudo[30513]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxswkevwsslwhwmhtwjwmzzjhbrvfopl ; /usr/bin/python3'
Nov 24 17:59:56 compute-0 sudo[30513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:59:56 compute-0 python3[30515]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 17:59:56 compute-0 sudo[30513]: pam_unix(sudo:session): session closed for user root
Nov 24 17:59:56 compute-0 sudo[30586]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtuzwtgfanmkrtwoinwamqbwwlrzkjgd ; /usr/bin/python3'
Nov 24 17:59:56 compute-0 sudo[30586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:59:56 compute-0 python3[30588]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764007193.5245514-33756-72718121001795/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:59:56 compute-0 sudo[30586]: pam_unix(sudo:session): session closed for user root
Nov 24 17:59:56 compute-0 sudo[30612]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgtvephjvnbrprrjhrsuvtxnrzhpnagc ; /usr/bin/python3'
Nov 24 17:59:56 compute-0 sudo[30612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:59:56 compute-0 python3[30614]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 17:59:56 compute-0 sudo[30612]: pam_unix(sudo:session): session closed for user root
Nov 24 17:59:57 compute-0 sudo[30685]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbadssiomgtvdulfollsktlfedsvwbla ; /usr/bin/python3'
Nov 24 17:59:57 compute-0 sudo[30685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:59:57 compute-0 python3[30687]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764007193.5245514-33756-72718121001795/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:59:57 compute-0 sudo[30685]: pam_unix(sudo:session): session closed for user root
Nov 24 17:59:57 compute-0 sudo[30711]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbqgwgiwcqrnutqwqctkcgkamrsephmj ; /usr/bin/python3'
Nov 24 17:59:57 compute-0 sudo[30711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:59:57 compute-0 python3[30713]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 17:59:57 compute-0 sudo[30711]: pam_unix(sudo:session): session closed for user root
Nov 24 17:59:57 compute-0 sudo[30784]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkglwddvlxitufjkbfvgsxmsiaetcftl ; /usr/bin/python3'
Nov 24 17:59:57 compute-0 sudo[30784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 17:59:57 compute-0 python3[30786]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764007193.5245514-33756-72718121001795/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6646317362318a9831d66a1804f6bb7dd1b97cd5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 17:59:57 compute-0 sudo[30784]: pam_unix(sudo:session): session closed for user root
Nov 24 18:00:00 compute-0 sshd-session[30811]: Unable to negotiate with 192.168.122.11 port 44588: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 24 18:00:00 compute-0 sshd-session[30812]: Connection closed by 192.168.122.11 port 44540 [preauth]
Nov 24 18:00:00 compute-0 sshd-session[30813]: Connection closed by 192.168.122.11 port 44554 [preauth]
Nov 24 18:00:00 compute-0 sshd-session[30814]: Unable to negotiate with 192.168.122.11 port 44570: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 24 18:00:00 compute-0 sshd-session[30815]: Unable to negotiate with 192.168.122.11 port 44578: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 24 18:00:09 compute-0 python3[30844]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:01:01 compute-0 CROND[30849]: (root) CMD (run-parts /etc/cron.hourly)
Nov 24 18:01:01 compute-0 run-parts[30852]: (/etc/cron.hourly) starting 0anacron
Nov 24 18:01:01 compute-0 anacron[30860]: Anacron started on 2025-11-24
Nov 24 18:01:01 compute-0 anacron[30860]: Will run job `cron.daily' in 28 min.
Nov 24 18:01:01 compute-0 anacron[30860]: Will run job `cron.weekly' in 48 min.
Nov 24 18:01:01 compute-0 anacron[30860]: Will run job `cron.monthly' in 68 min.
Nov 24 18:01:01 compute-0 anacron[30860]: Jobs will be executed sequentially
Nov 24 18:01:01 compute-0 run-parts[30862]: (/etc/cron.hourly) finished 0anacron
Nov 24 18:01:01 compute-0 CROND[30848]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 24 18:01:32 compute-0 sshd-session[30863]: Invalid user support from 78.128.112.74 port 51008
Nov 24 18:01:32 compute-0 sshd-session[30863]: Connection closed by invalid user support 78.128.112.74 port 51008 [preauth]
Nov 24 18:02:54 compute-0 sshd-session[30865]: Invalid user onlime_r from 80.94.95.116 port 19380
Nov 24 18:02:55 compute-0 sshd-session[30865]: Connection closed by invalid user onlime_r 80.94.95.116 port 19380 [preauth]
Nov 24 18:05:09 compute-0 sshd-session[29930]: Received disconnect from 38.102.83.41 port 45506:11: disconnected by user
Nov 24 18:05:09 compute-0 sshd-session[29930]: Disconnected from user zuul 38.102.83.41 port 45506
Nov 24 18:05:09 compute-0 sshd-session[29927]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:05:09 compute-0 systemd-logind[822]: Session 6 logged out. Waiting for processes to exit.
Nov 24 18:05:09 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Nov 24 18:05:09 compute-0 systemd[1]: session-6.scope: Consumed 4.426s CPU time.
Nov 24 18:05:09 compute-0 systemd-logind[822]: Removed session 6.
Nov 24 18:10:47 compute-0 sshd-session[30870]: Accepted publickey for zuul from 192.168.122.30 port 49598 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:10:47 compute-0 systemd-logind[822]: New session 7 of user zuul.
Nov 24 18:10:47 compute-0 systemd[1]: Started Session 7 of User zuul.
Nov 24 18:10:47 compute-0 sshd-session[30870]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:10:48 compute-0 python3.9[31023]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:10:49 compute-0 sudo[31202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcpentuqtzjykfclslqorujqxnjnqlef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764007849.3636994-32-121948818298565/AnsiballZ_command.py'
Nov 24 18:10:49 compute-0 sudo[31202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:10:49 compute-0 python3.9[31204]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:10:57 compute-0 sudo[31202]: pam_unix(sudo:session): session closed for user root
Nov 24 18:10:58 compute-0 sshd-session[30873]: Connection closed by 192.168.122.30 port 49598
Nov 24 18:10:58 compute-0 sshd-session[30870]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:10:58 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Nov 24 18:10:58 compute-0 systemd[1]: session-7.scope: Consumed 7.648s CPU time.
Nov 24 18:10:58 compute-0 systemd-logind[822]: Session 7 logged out. Waiting for processes to exit.
Nov 24 18:10:58 compute-0 systemd-logind[822]: Removed session 7.
Nov 24 18:11:13 compute-0 sshd-session[31262]: Accepted publickey for zuul from 192.168.122.30 port 33682 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:11:13 compute-0 systemd-logind[822]: New session 8 of user zuul.
Nov 24 18:11:13 compute-0 systemd[1]: Started Session 8 of User zuul.
Nov 24 18:11:13 compute-0 sshd-session[31262]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:11:14 compute-0 python3.9[31415]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 24 18:11:15 compute-0 python3.9[31589]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:11:16 compute-0 sudo[31739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lndogypxzwuniaylhiyhvsgsehhjutwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764007875.8411613-45-271797278956033/AnsiballZ_command.py'
Nov 24 18:11:16 compute-0 sudo[31739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:11:16 compute-0 python3.9[31741]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:11:16 compute-0 sudo[31739]: pam_unix(sudo:session): session closed for user root
Nov 24 18:11:18 compute-0 sudo[31893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lspvugfudpdsqziwqgkcrrerrtshiomn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764007877.72199-57-249360282815403/AnsiballZ_stat.py'
Nov 24 18:11:18 compute-0 sudo[31893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:11:18 compute-0 python3.9[31895]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:11:18 compute-0 sudo[31893]: pam_unix(sudo:session): session closed for user root
Nov 24 18:11:18 compute-0 sudo[32045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlyymizovbptdwsimgysntuacxofprvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764007878.420731-65-196509787616114/AnsiballZ_file.py'
Nov 24 18:11:18 compute-0 sudo[32045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:11:19 compute-0 python3.9[32047]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:11:19 compute-0 sudo[32045]: pam_unix(sudo:session): session closed for user root
Nov 24 18:11:19 compute-0 sudo[32197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehdoghhvbqpzgcztikpawwjceutvtebj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764007879.19873-73-277902729656581/AnsiballZ_stat.py'
Nov 24 18:11:19 compute-0 sudo[32197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:11:19 compute-0 python3.9[32199]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:11:19 compute-0 sudo[32197]: pam_unix(sudo:session): session closed for user root
Nov 24 18:11:20 compute-0 sudo[32320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbqdsetkyuveciomfwkdszyaixogpaeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764007879.19873-73-277902729656581/AnsiballZ_copy.py'
Nov 24 18:11:20 compute-0 sudo[32320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:11:20 compute-0 python3.9[32322]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764007879.19873-73-277902729656581/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:11:20 compute-0 sudo[32320]: pam_unix(sudo:session): session closed for user root
Nov 24 18:11:20 compute-0 sudo[32472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjmznjpymqiewsxqenfeutaavabwxmvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764007880.5860062-88-221028859730199/AnsiballZ_setup.py'
Nov 24 18:11:20 compute-0 sudo[32472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:11:21 compute-0 python3.9[32474]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:11:21 compute-0 sudo[32472]: pam_unix(sudo:session): session closed for user root
Nov 24 18:11:21 compute-0 sudo[32628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqhgurggzjwqkcjxmdhqfsbmhqnglhjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764007881.5648046-96-120166933490903/AnsiballZ_file.py'
Nov 24 18:11:21 compute-0 sudo[32628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:11:22 compute-0 python3.9[32630]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:11:22 compute-0 sudo[32628]: pam_unix(sudo:session): session closed for user root
Nov 24 18:11:22 compute-0 sudo[32780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpackssnqpgjlordrpeopilcxvxcoazh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764007882.186236-105-246731886379770/AnsiballZ_file.py'
Nov 24 18:11:22 compute-0 sudo[32780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:11:22 compute-0 python3.9[32782]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:11:22 compute-0 sudo[32780]: pam_unix(sudo:session): session closed for user root
Nov 24 18:11:23 compute-0 python3.9[32932]: ansible-ansible.builtin.service_facts Invoked
Nov 24 18:11:26 compute-0 python3.9[33185]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:11:27 compute-0 python3.9[33335]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:11:28 compute-0 python3.9[33489]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:11:28 compute-0 sudo[33645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtgwkirwdvnpitpmjvdbypfcdnsuibrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764007888.5617123-153-128544951476799/AnsiballZ_setup.py'
Nov 24 18:11:28 compute-0 sudo[33645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:11:29 compute-0 python3.9[33647]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 18:11:29 compute-0 sudo[33645]: pam_unix(sudo:session): session closed for user root
Nov 24 18:11:29 compute-0 sudo[33729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zctjxvnngvxboifblqyliqdnlfmjcozd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764007888.5617123-153-128544951476799/AnsiballZ_dnf.py'
Nov 24 18:11:29 compute-0 sudo[33729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:11:30 compute-0 python3.9[33731]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:12:16 compute-0 systemd[1]: Reloading.
Nov 24 18:12:16 compute-0 systemd-rc-local-generator[33929]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:12:16 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 24 18:12:16 compute-0 systemd[1]: Reloading.
Nov 24 18:12:17 compute-0 systemd-rc-local-generator[33969]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:12:17 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 24 18:12:17 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 24 18:12:17 compute-0 systemd[1]: Reloading.
Nov 24 18:12:17 compute-0 systemd-rc-local-generator[34004]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:12:17 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 24 18:12:17 compute-0 dbus-broker-launch[812]: Noticed file-system modification, trigger reload.
Nov 24 18:12:17 compute-0 dbus-broker-launch[812]: Noticed file-system modification, trigger reload.
Nov 24 18:12:17 compute-0 dbus-broker-launch[812]: Noticed file-system modification, trigger reload.
Nov 24 18:13:20 compute-0 kernel: SELinux:  Converting 2718 SID table entries...
Nov 24 18:13:20 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 18:13:20 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 24 18:13:20 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 18:13:20 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 24 18:13:20 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 18:13:20 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 18:13:20 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 18:13:21 compute-0 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 24 18:13:21 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 18:13:21 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 18:13:21 compute-0 systemd[1]: Reloading.
Nov 24 18:13:21 compute-0 systemd-rc-local-generator[34325]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:13:21 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 18:13:21 compute-0 sudo[33729]: pam_unix(sudo:session): session closed for user root
Nov 24 18:13:22 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 18:13:22 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 18:13:22 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.069s CPU time.
Nov 24 18:13:22 compute-0 systemd[1]: run-reb42ff0518594decbb993a17ec7f9b20.service: Deactivated successfully.
Nov 24 18:13:22 compute-0 sudo[35231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yixtlofopvejrotkwslrhjurgdqpwlzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008002.0572088-165-158329087918774/AnsiballZ_command.py'
Nov 24 18:13:22 compute-0 sudo[35231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:13:22 compute-0 python3.9[35233]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:13:23 compute-0 sudo[35231]: pam_unix(sudo:session): session closed for user root
Nov 24 18:13:24 compute-0 sudo[35513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcnojhfvqfgkypjhlpiunxafppuqdiri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008003.5610604-173-151472973873116/AnsiballZ_selinux.py'
Nov 24 18:13:24 compute-0 sudo[35513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:13:24 compute-0 python3.9[35515]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 24 18:13:24 compute-0 sudo[35513]: pam_unix(sudo:session): session closed for user root
Nov 24 18:13:25 compute-0 sudo[35665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sstagpbyvgmazqpcddehcfpcjyzaixse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008004.7917962-184-217267818563105/AnsiballZ_command.py'
Nov 24 18:13:25 compute-0 sudo[35665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:13:25 compute-0 python3.9[35667]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 24 18:13:26 compute-0 sudo[35665]: pam_unix(sudo:session): session closed for user root
Nov 24 18:13:26 compute-0 sudo[35818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axgdvxblywdtfakdklmhyzbyitorviko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008006.525771-192-120650282562813/AnsiballZ_file.py'
Nov 24 18:13:26 compute-0 sudo[35818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:13:27 compute-0 python3.9[35820]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:13:27 compute-0 sudo[35818]: pam_unix(sudo:session): session closed for user root
Nov 24 18:13:28 compute-0 sudo[35970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgdnwzfkpkvlvasroargoyovdxpqzyby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008007.740571-200-202669251039764/AnsiballZ_mount.py'
Nov 24 18:13:28 compute-0 sudo[35970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:13:28 compute-0 python3.9[35972]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 24 18:13:28 compute-0 sudo[35970]: pam_unix(sudo:session): session closed for user root
Nov 24 18:13:29 compute-0 sudo[36122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmhukbyalgioalrhgyuhjluovxmfgqqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008009.2072845-228-85383961792222/AnsiballZ_file.py'
Nov 24 18:13:29 compute-0 sudo[36122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:13:29 compute-0 python3.9[36124]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:13:29 compute-0 sudo[36122]: pam_unix(sudo:session): session closed for user root
Nov 24 18:13:30 compute-0 sudo[36274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-albffgbravyheyrgeaqjxnpsgyaxmjzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008009.8179655-236-83641516892809/AnsiballZ_stat.py'
Nov 24 18:13:30 compute-0 sudo[36274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:13:30 compute-0 python3.9[36276]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:13:30 compute-0 sudo[36274]: pam_unix(sudo:session): session closed for user root
Nov 24 18:13:30 compute-0 sudo[36397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyygulwilbscugudzaqphahfqgbmiekc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008009.8179655-236-83641516892809/AnsiballZ_copy.py'
Nov 24 18:13:30 compute-0 sudo[36397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:13:30 compute-0 python3.9[36399]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764008009.8179655-236-83641516892809/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4453bc72f5dea8ea952ecd01786d1a0544923cc0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:13:30 compute-0 sudo[36397]: pam_unix(sudo:session): session closed for user root
Nov 24 18:13:31 compute-0 sudo[36549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfdilgfumodvsronzaxnjzcvdqecmlsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008011.2868962-260-29730986021924/AnsiballZ_stat.py'
Nov 24 18:13:31 compute-0 sudo[36549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:13:31 compute-0 python3.9[36551]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:13:31 compute-0 sudo[36549]: pam_unix(sudo:session): session closed for user root
Nov 24 18:13:32 compute-0 sudo[36701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsxofvuknksoveitzewapvuvfchxnrar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008011.9260628-268-160944145598862/AnsiballZ_command.py'
Nov 24 18:13:32 compute-0 sudo[36701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:13:34 compute-0 sshd-session[36704]: Connection closed by authenticating user root 80.94.95.115 port 17452 [preauth]
Nov 24 18:13:35 compute-0 python3.9[36703]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:13:35 compute-0 sudo[36701]: pam_unix(sudo:session): session closed for user root
Nov 24 18:13:35 compute-0 sudo[36856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmkraovvhimhkjkjyzyfcsjbuswggchz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008015.4011292-276-86338253916351/AnsiballZ_file.py'
Nov 24 18:13:35 compute-0 sudo[36856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:13:35 compute-0 python3.9[36858]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:13:35 compute-0 sudo[36856]: pam_unix(sudo:session): session closed for user root
Nov 24 18:13:36 compute-0 sudo[37008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxlyzocuzxzxkllyfkdbagvxplkwgulw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008016.2529428-287-44598752135911/AnsiballZ_getent.py'
Nov 24 18:13:36 compute-0 sudo[37008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:13:36 compute-0 python3.9[37010]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 24 18:13:36 compute-0 sudo[37008]: pam_unix(sudo:session): session closed for user root
Nov 24 18:13:36 compute-0 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 18:13:37 compute-0 sudo[37162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhalanptwrtheegilfpvgxbklfycxdnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008016.9866424-295-173353072490796/AnsiballZ_group.py'
Nov 24 18:13:37 compute-0 sudo[37162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:13:37 compute-0 python3.9[37164]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 18:13:37 compute-0 groupadd[37165]: group added to /etc/group: name=qemu, GID=107
Nov 24 18:13:37 compute-0 groupadd[37165]: group added to /etc/gshadow: name=qemu
Nov 24 18:13:37 compute-0 groupadd[37165]: new group: name=qemu, GID=107
Nov 24 18:13:37 compute-0 sudo[37162]: pam_unix(sudo:session): session closed for user root
Nov 24 18:13:38 compute-0 sudo[37320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gapafftjtuusgbwrpdyfjtstgvmgsdzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008017.766634-303-246242185773488/AnsiballZ_user.py'
Nov 24 18:13:38 compute-0 sudo[37320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:13:38 compute-0 python3.9[37322]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 24 18:13:38 compute-0 useradd[37324]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Nov 24 18:13:38 compute-0 sudo[37320]: pam_unix(sudo:session): session closed for user root
Nov 24 18:13:39 compute-0 sudo[37480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gomqewaghemvaxdjuyxifebswelzotps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008018.8226964-311-224675658958562/AnsiballZ_getent.py'
Nov 24 18:13:39 compute-0 sudo[37480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:13:39 compute-0 python3.9[37482]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 24 18:13:39 compute-0 sudo[37480]: pam_unix(sudo:session): session closed for user root
Nov 24 18:13:39 compute-0 sudo[37633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjegrxxdfttvnmwfafqplwqrxgmofrax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008019.4560773-319-27785099618773/AnsiballZ_group.py'
Nov 24 18:13:39 compute-0 sudo[37633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:13:39 compute-0 python3.9[37635]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 18:13:39 compute-0 groupadd[37636]: group added to /etc/group: name=hugetlbfs, GID=42477
Nov 24 18:13:39 compute-0 groupadd[37636]: group added to /etc/gshadow: name=hugetlbfs
Nov 24 18:13:39 compute-0 groupadd[37636]: new group: name=hugetlbfs, GID=42477
Nov 24 18:13:39 compute-0 sudo[37633]: pam_unix(sudo:session): session closed for user root
Nov 24 18:13:40 compute-0 sudo[37791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrlpzetumkzmgwmhhzhmmotavciljpft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008020.105856-328-251520284076999/AnsiballZ_file.py'
Nov 24 18:13:40 compute-0 sudo[37791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:13:40 compute-0 python3.9[37793]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 24 18:13:40 compute-0 sudo[37791]: pam_unix(sudo:session): session closed for user root
Nov 24 18:13:41 compute-0 sudo[37943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uosgtytiasbnhihbgsgjlnvndgrjhwrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008020.8610032-339-52803889811870/AnsiballZ_dnf.py'
Nov 24 18:13:41 compute-0 sudo[37943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:13:41 compute-0 python3.9[37945]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:13:42 compute-0 sudo[37943]: pam_unix(sudo:session): session closed for user root
Nov 24 18:13:43 compute-0 sudo[38097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhtnbfojaihzttnecgbajtbxlpnscbtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008023.0521421-347-133151082796183/AnsiballZ_file.py'
Nov 24 18:13:43 compute-0 sudo[38097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:13:43 compute-0 python3.9[38099]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:13:43 compute-0 sudo[38097]: pam_unix(sudo:session): session closed for user root
Nov 24 18:13:44 compute-0 sudo[38249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kufpooqkzbqgitdtqufobmoinqymxbrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008023.7231472-355-262813525443770/AnsiballZ_stat.py'
Nov 24 18:13:44 compute-0 sudo[38249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:13:44 compute-0 python3.9[38251]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:13:44 compute-0 sudo[38249]: pam_unix(sudo:session): session closed for user root
Nov 24 18:13:44 compute-0 sudo[38372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfxwgvltbfilereqipjkiawuolmndlzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008023.7231472-355-262813525443770/AnsiballZ_copy.py'
Nov 24 18:13:44 compute-0 sudo[38372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:13:44 compute-0 python3.9[38374]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764008023.7231472-355-262813525443770/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:13:44 compute-0 sudo[38372]: pam_unix(sudo:session): session closed for user root
Nov 24 18:13:45 compute-0 sudo[38524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rthuuwndpdkhkufyvttsvvmbcldwhvrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008025.0202658-370-213437102442044/AnsiballZ_systemd.py'
Nov 24 18:13:45 compute-0 sudo[38524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:13:45 compute-0 python3.9[38526]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 18:13:45 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 24 18:13:46 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 24 18:13:46 compute-0 kernel: Bridge firewalling registered
Nov 24 18:13:46 compute-0 systemd-modules-load[38530]: Inserted module 'br_netfilter'
Nov 24 18:13:46 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 24 18:13:46 compute-0 sudo[38524]: pam_unix(sudo:session): session closed for user root
Nov 24 18:13:46 compute-0 sudo[38684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puaibopnbmtxesflyhqjutxeffgwraar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008026.1679487-378-193958592371871/AnsiballZ_stat.py'
Nov 24 18:13:46 compute-0 sudo[38684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:13:46 compute-0 python3.9[38686]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:13:46 compute-0 sudo[38684]: pam_unix(sudo:session): session closed for user root
Nov 24 18:13:46 compute-0 sudo[38807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvgffcyyipmekbghfffhdabjvwuogahr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008026.1679487-378-193958592371871/AnsiballZ_copy.py'
Nov 24 18:13:46 compute-0 sudo[38807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:13:47 compute-0 python3.9[38809]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764008026.1679487-378-193958592371871/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:13:47 compute-0 sudo[38807]: pam_unix(sudo:session): session closed for user root
Nov 24 18:13:47 compute-0 sudo[38959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sisacuymncyawleadmcqfzwpfpvwbbwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008027.3660586-396-210250162172039/AnsiballZ_dnf.py'
Nov 24 18:13:47 compute-0 sudo[38959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:13:47 compute-0 python3.9[38961]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:13:50 compute-0 dbus-broker-launch[812]: Noticed file-system modification, trigger reload.
Nov 24 18:13:51 compute-0 dbus-broker-launch[812]: Noticed file-system modification, trigger reload.
Nov 24 18:13:51 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 18:13:51 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 18:13:51 compute-0 systemd[1]: Reloading.
Nov 24 18:13:51 compute-0 systemd-rc-local-generator[39025]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:13:51 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 18:13:51 compute-0 sudo[38959]: pam_unix(sudo:session): session closed for user root
Nov 24 18:13:52 compute-0 python3.9[40231]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:13:53 compute-0 python3.9[41371]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 24 18:13:54 compute-0 python3.9[42061]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:13:54 compute-0 sudo[42779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjexnstdkmgxluziyaeeaoplnoiiborg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008034.4408824-435-31845221199278/AnsiballZ_command.py'
Nov 24 18:13:54 compute-0 sudo[42779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:13:54 compute-0 python3.9[42797]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:13:55 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 24 18:13:55 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 18:13:55 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 18:13:55 compute-0 systemd[1]: man-db-cache-update.service: Consumed 4.829s CPU time.
Nov 24 18:13:55 compute-0 systemd[1]: run-re355a0c3f3f64fae8f5d5c04c3d54460.service: Deactivated successfully.
Nov 24 18:13:55 compute-0 systemd[1]: Starting Authorization Manager...
Nov 24 18:13:55 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 24 18:13:55 compute-0 polkitd[43339]: Started polkitd version 0.117
Nov 24 18:13:55 compute-0 polkitd[43339]: Loading rules from directory /etc/polkit-1/rules.d
Nov 24 18:13:55 compute-0 polkitd[43339]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 24 18:13:55 compute-0 polkitd[43339]: Finished loading, compiling and executing 2 rules
Nov 24 18:13:55 compute-0 systemd[1]: Started Authorization Manager.
Nov 24 18:13:55 compute-0 polkitd[43339]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 24 18:13:55 compute-0 sudo[42779]: pam_unix(sudo:session): session closed for user root
Nov 24 18:13:56 compute-0 sudo[43507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eimnapzznolycwjrolhsatztlqcmqmsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008035.7893407-444-78632093208137/AnsiballZ_systemd.py'
Nov 24 18:13:56 compute-0 sudo[43507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:13:56 compute-0 python3.9[43509]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:13:56 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 24 18:13:56 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 24 18:13:56 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 24 18:13:56 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 24 18:13:56 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 24 18:13:56 compute-0 sudo[43507]: pam_unix(sudo:session): session closed for user root
Nov 24 18:13:57 compute-0 python3.9[43671]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 24 18:13:59 compute-0 sudo[43821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxzmameiagvswdppakdxuipjziuhfqhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008039.010403-501-91408643196405/AnsiballZ_systemd.py'
Nov 24 18:13:59 compute-0 sudo[43821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:13:59 compute-0 python3.9[43823]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:13:59 compute-0 systemd[1]: Reloading.
Nov 24 18:13:59 compute-0 systemd-rc-local-generator[43853]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:13:59 compute-0 sudo[43821]: pam_unix(sudo:session): session closed for user root
Nov 24 18:14:00 compute-0 sudo[44010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yozuqjrfwfmmnntswjepdokgajcuexmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008039.906161-501-162832220551218/AnsiballZ_systemd.py'
Nov 24 18:14:00 compute-0 sudo[44010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:14:00 compute-0 python3.9[44012]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:14:00 compute-0 systemd[1]: Reloading.
Nov 24 18:14:00 compute-0 systemd-rc-local-generator[44042]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:14:00 compute-0 sudo[44010]: pam_unix(sudo:session): session closed for user root
Nov 24 18:14:01 compute-0 sudo[44199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvvekqgtkkbmybsnsjrbltxtwqwdonnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008040.898544-517-74221253895964/AnsiballZ_command.py'
Nov 24 18:14:01 compute-0 sudo[44199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:14:01 compute-0 python3.9[44201]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:14:01 compute-0 sudo[44199]: pam_unix(sudo:session): session closed for user root
Nov 24 18:14:01 compute-0 sudo[44352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcyjraexenqipmhqhblhvdvweimungaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008041.5600643-525-191360685101296/AnsiballZ_command.py'
Nov 24 18:14:01 compute-0 sudo[44352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:14:02 compute-0 python3.9[44354]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:14:02 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 24 18:14:02 compute-0 sudo[44352]: pam_unix(sudo:session): session closed for user root
Nov 24 18:14:02 compute-0 sudo[44505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgobsnhsmxuhgjoxhvipfjlssmeitnzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008042.223399-533-160673877510378/AnsiballZ_command.py'
Nov 24 18:14:02 compute-0 sudo[44505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:14:02 compute-0 python3.9[44507]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:14:03 compute-0 sudo[44505]: pam_unix(sudo:session): session closed for user root
Nov 24 18:14:04 compute-0 sudo[44667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnqlmgcgxgdgtrlanuupythsoquuxfun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008044.125635-541-273282559994423/AnsiballZ_command.py'
Nov 24 18:14:04 compute-0 sudo[44667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:14:04 compute-0 python3.9[44669]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:14:04 compute-0 sudo[44667]: pam_unix(sudo:session): session closed for user root
Nov 24 18:14:04 compute-0 sudo[44820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtbxqjbpwpjazxrgbvaithnnalkjnqzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008044.652626-549-250097727498620/AnsiballZ_systemd.py'
Nov 24 18:14:04 compute-0 sudo[44820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:14:05 compute-0 python3.9[44822]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 18:14:05 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 24 18:14:05 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Nov 24 18:14:05 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Nov 24 18:14:05 compute-0 systemd[1]: Starting Apply Kernel Variables...
Nov 24 18:14:05 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 24 18:14:05 compute-0 systemd[1]: Finished Apply Kernel Variables.
Nov 24 18:14:05 compute-0 sudo[44820]: pam_unix(sudo:session): session closed for user root
Nov 24 18:14:05 compute-0 sshd-session[31265]: Connection closed by 192.168.122.30 port 33682
Nov 24 18:14:05 compute-0 sshd-session[31262]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:14:05 compute-0 systemd-logind[822]: Session 8 logged out. Waiting for processes to exit.
Nov 24 18:14:05 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Nov 24 18:14:05 compute-0 systemd[1]: session-8.scope: Consumed 2min 7.797s CPU time.
Nov 24 18:14:05 compute-0 systemd-logind[822]: Removed session 8.
Nov 24 18:14:12 compute-0 sshd-session[44852]: Accepted publickey for zuul from 192.168.122.30 port 50300 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:14:12 compute-0 systemd-logind[822]: New session 9 of user zuul.
Nov 24 18:14:12 compute-0 systemd[1]: Started Session 9 of User zuul.
Nov 24 18:14:12 compute-0 sshd-session[44852]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:14:13 compute-0 python3.9[45005]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:14:14 compute-0 sudo[45159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fooigpnilyfnihwifpbdwwpvzgugrisr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008054.186691-36-166930425226609/AnsiballZ_getent.py'
Nov 24 18:14:14 compute-0 sudo[45159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:14:14 compute-0 python3.9[45161]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 24 18:14:14 compute-0 sudo[45159]: pam_unix(sudo:session): session closed for user root
Nov 24 18:14:15 compute-0 sudo[45312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojwrwmzbmajigkjcpqtykpmokbbfgxer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008054.974303-44-262430552216392/AnsiballZ_group.py'
Nov 24 18:14:15 compute-0 sudo[45312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:14:15 compute-0 python3.9[45314]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 18:14:15 compute-0 groupadd[45315]: group added to /etc/group: name=openvswitch, GID=42476
Nov 24 18:14:15 compute-0 groupadd[45315]: group added to /etc/gshadow: name=openvswitch
Nov 24 18:14:15 compute-0 groupadd[45315]: new group: name=openvswitch, GID=42476
Nov 24 18:14:15 compute-0 sudo[45312]: pam_unix(sudo:session): session closed for user root
Nov 24 18:14:16 compute-0 sudo[45470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuqmowmasaijhtnnqjnesrixhxqxsmbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008055.7372272-52-275617001382956/AnsiballZ_user.py'
Nov 24 18:14:16 compute-0 sudo[45470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:14:16 compute-0 python3.9[45472]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 24 18:14:16 compute-0 useradd[45474]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Nov 24 18:14:16 compute-0 useradd[45474]: add 'openvswitch' to group 'hugetlbfs'
Nov 24 18:14:16 compute-0 useradd[45474]: add 'openvswitch' to shadow group 'hugetlbfs'
Nov 24 18:14:16 compute-0 sudo[45470]: pam_unix(sudo:session): session closed for user root
Nov 24 18:14:17 compute-0 sudo[45630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkewcaocnxrqhgiedfyooyedqenseegd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008056.7251844-62-65539466172526/AnsiballZ_setup.py'
Nov 24 18:14:17 compute-0 sudo[45630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:14:17 compute-0 python3.9[45632]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 18:14:17 compute-0 sudo[45630]: pam_unix(sudo:session): session closed for user root
Nov 24 18:14:17 compute-0 sudo[45714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cazkwneqyjkxfspjogpiljgtieivlrfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008056.7251844-62-65539466172526/AnsiballZ_dnf.py'
Nov 24 18:14:17 compute-0 sudo[45714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:14:18 compute-0 python3.9[45716]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 18:14:21 compute-0 sudo[45714]: pam_unix(sudo:session): session closed for user root
Nov 24 18:14:21 compute-0 sudo[45877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isnxkpvrgssweivsptntilmwzfehebcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008061.5028136-76-260964334264649/AnsiballZ_dnf.py'
Nov 24 18:14:21 compute-0 sudo[45877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:14:21 compute-0 python3.9[45879]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:14:32 compute-0 kernel: SELinux:  Converting 2730 SID table entries...
Nov 24 18:14:32 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 18:14:32 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 24 18:14:32 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 18:14:32 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 24 18:14:32 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 18:14:32 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 18:14:32 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 18:14:32 compute-0 groupadd[45902]: group added to /etc/group: name=unbound, GID=993
Nov 24 18:14:32 compute-0 groupadd[45902]: group added to /etc/gshadow: name=unbound
Nov 24 18:14:32 compute-0 groupadd[45902]: new group: name=unbound, GID=993
Nov 24 18:14:32 compute-0 useradd[45909]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Nov 24 18:14:32 compute-0 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 24 18:14:32 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 24 18:14:34 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 18:14:34 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 18:14:34 compute-0 systemd[1]: Reloading.
Nov 24 18:14:34 compute-0 systemd-rc-local-generator[46405]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:14:34 compute-0 systemd-sysv-generator[46409]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:14:34 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 18:14:34 compute-0 sudo[45877]: pam_unix(sudo:session): session closed for user root
Nov 24 18:14:35 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 18:14:35 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 18:14:35 compute-0 systemd[1]: run-r140e3154bdca4ca182f17759746d38cb.service: Deactivated successfully.
Nov 24 18:14:35 compute-0 sudo[46975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lerfgfmdpjnxqbymbajfmqmfldlkruox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008075.1272688-84-55169049561360/AnsiballZ_systemd.py'
Nov 24 18:14:35 compute-0 sudo[46975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:14:35 compute-0 python3.9[46977]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 18:14:36 compute-0 systemd[1]: Reloading.
Nov 24 18:14:36 compute-0 systemd-sysv-generator[47007]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:14:36 compute-0 systemd-rc-local-generator[47004]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:14:36 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Nov 24 18:14:36 compute-0 chown[47019]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 24 18:14:36 compute-0 ovs-ctl[47024]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 24 18:14:36 compute-0 ovs-ctl[47024]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 24 18:14:36 compute-0 ovs-ctl[47024]: Starting ovsdb-server [  OK  ]
Nov 24 18:14:36 compute-0 ovs-vsctl[47073]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 24 18:14:36 compute-0 ovs-vsctl[47093]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"302e9f34-0427-4ff9-a29b-2fc7b5250666\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 24 18:14:36 compute-0 ovs-ctl[47024]: Configuring Open vSwitch system IDs [  OK  ]
Nov 24 18:14:36 compute-0 ovs-vsctl[47099]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 24 18:14:36 compute-0 ovs-ctl[47024]: Enabling remote OVSDB managers [  OK  ]
Nov 24 18:14:36 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Nov 24 18:14:36 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 24 18:14:36 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 24 18:14:36 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 24 18:14:36 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Nov 24 18:14:36 compute-0 ovs-ctl[47143]: Inserting openvswitch module [  OK  ]
Nov 24 18:14:36 compute-0 ovs-ctl[47112]: Starting ovs-vswitchd [  OK  ]
Nov 24 18:14:36 compute-0 ovs-vsctl[47161]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 24 18:14:36 compute-0 ovs-ctl[47112]: Enabling remote OVSDB managers [  OK  ]
Nov 24 18:14:36 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 24 18:14:36 compute-0 systemd[1]: Starting Open vSwitch...
Nov 24 18:14:36 compute-0 systemd[1]: Finished Open vSwitch.
Nov 24 18:14:37 compute-0 sudo[46975]: pam_unix(sudo:session): session closed for user root
Nov 24 18:14:37 compute-0 python3.9[47312]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:14:38 compute-0 sudo[47462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbstxzwiqldkdquqliiatxrlnrogscwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008077.970629-102-202287080332834/AnsiballZ_sefcontext.py'
Nov 24 18:14:38 compute-0 sudo[47462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:14:38 compute-0 python3.9[47464]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 24 18:14:39 compute-0 kernel: SELinux:  Converting 2744 SID table entries...
Nov 24 18:14:39 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 18:14:39 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 24 18:14:39 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 18:14:39 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 24 18:14:39 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 18:14:39 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 18:14:39 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 18:14:39 compute-0 sudo[47462]: pam_unix(sudo:session): session closed for user root
Nov 24 18:14:40 compute-0 python3.9[47619]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:14:41 compute-0 sudo[47775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vazfjxmvhdncrgxxnrnzbtxufagrkhop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008081.099534-120-161183919945930/AnsiballZ_dnf.py'
Nov 24 18:14:41 compute-0 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 24 18:14:41 compute-0 sudo[47775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:14:41 compute-0 python3.9[47777]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:14:42 compute-0 sudo[47775]: pam_unix(sudo:session): session closed for user root
Nov 24 18:14:43 compute-0 sudo[47928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjwknyclshpvmtlrslkoinmsclutovtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008082.952099-128-126350888902063/AnsiballZ_command.py'
Nov 24 18:14:43 compute-0 sudo[47928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:14:43 compute-0 python3.9[47930]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:14:44 compute-0 sudo[47928]: pam_unix(sudo:session): session closed for user root
Nov 24 18:14:44 compute-0 sudo[48215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joxssnmqnglaqnfkadopxgpaotraxvdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008084.3671665-136-207407000109902/AnsiballZ_file.py'
Nov 24 18:14:44 compute-0 sudo[48215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:14:45 compute-0 python3.9[48217]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 24 18:14:45 compute-0 sudo[48215]: pam_unix(sudo:session): session closed for user root
Nov 24 18:14:45 compute-0 python3.9[48367]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:14:46 compute-0 sudo[48519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkyfmhsxbkkjvqcxtsxbzirnmqfoyoim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008085.99144-152-39999328136474/AnsiballZ_dnf.py'
Nov 24 18:14:46 compute-0 sudo[48519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:14:46 compute-0 python3.9[48521]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:14:48 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 18:14:48 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 18:14:48 compute-0 systemd[1]: Reloading.
Nov 24 18:14:48 compute-0 systemd-rc-local-generator[48559]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:14:48 compute-0 systemd-sysv-generator[48562]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:14:48 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 18:14:48 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 18:14:48 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 18:14:48 compute-0 systemd[1]: run-rae6eb5a6ad0a418980af5d303af13673.service: Deactivated successfully.
Nov 24 18:14:48 compute-0 sudo[48519]: pam_unix(sudo:session): session closed for user root
Nov 24 18:14:49 compute-0 sudo[48836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixtkmlbbqctluccmgykdiladtxrxpgbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008089.1429913-160-104718160667286/AnsiballZ_systemd.py'
Nov 24 18:14:49 compute-0 sudo[48836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:14:49 compute-0 python3.9[48838]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 18:14:49 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 24 18:14:49 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Nov 24 18:14:49 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Nov 24 18:14:49 compute-0 NetworkManager[7196]: <info>  [1764008089.7578] caught SIGTERM, shutting down normally.
Nov 24 18:14:49 compute-0 systemd[1]: Stopping Network Manager...
Nov 24 18:14:49 compute-0 NetworkManager[7196]: <info>  [1764008089.7603] dhcp4 (eth0): canceled DHCP transaction
Nov 24 18:14:49 compute-0 NetworkManager[7196]: <info>  [1764008089.7603] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 18:14:49 compute-0 NetworkManager[7196]: <info>  [1764008089.7604] dhcp4 (eth0): state changed no lease
Nov 24 18:14:49 compute-0 NetworkManager[7196]: <info>  [1764008089.7609] manager: NetworkManager state is now CONNECTED_SITE
Nov 24 18:14:49 compute-0 NetworkManager[7196]: <info>  [1764008089.7725] exiting (success)
Nov 24 18:14:49 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 18:14:49 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 18:14:49 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 24 18:14:49 compute-0 systemd[1]: Stopped Network Manager.
Nov 24 18:14:49 compute-0 systemd[1]: NetworkManager.service: Consumed 9.934s CPU time, 4.1M memory peak, read 0B from disk, written 30.0K to disk.
Nov 24 18:14:49 compute-0 systemd[1]: Starting Network Manager...
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.8508] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:c726fd3c-29d8-43c4-9498-0fb31e19789a)
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.8509] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.8570] manager[0x55e422c56090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 24 18:14:49 compute-0 systemd[1]: Starting Hostname Service...
Nov 24 18:14:49 compute-0 systemd[1]: Started Hostname Service.
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9694] hostname: hostname: using hostnamed
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9696] hostname: static hostname changed from (none) to "compute-0"
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9701] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9707] manager[0x55e422c56090]: rfkill: Wi-Fi hardware radio set enabled
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9707] manager[0x55e422c56090]: rfkill: WWAN hardware radio set enabled
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9729] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9740] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9740] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9741] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9742] manager: Networking is enabled by state file
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9744] settings: Loaded settings plugin: keyfile (internal)
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9747] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9772] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9786] dhcp: init: Using DHCP client 'internal'
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9789] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9795] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9800] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9807] device (lo): Activation: starting connection 'lo' (5922deac-6043-4983-8df6-40dbc8abd7af)
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9812] device (eth0): carrier: link connected
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9816] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9820] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9820] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9825] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9829] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9833] device (eth1): carrier: link connected
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9836] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9840] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (730e1bbf-c4c7-52c0-85e9-2379c2b50bf6) (indicated)
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9841] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9844] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9848] device (eth1): Activation: starting connection 'ci-private-network' (730e1bbf-c4c7-52c0-85e9-2379c2b50bf6)
Nov 24 18:14:49 compute-0 systemd[1]: Started Network Manager.
Nov 24 18:14:49 compute-0 NetworkManager[48851]: <info>  [1764008089.9855] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 24 18:14:50 compute-0 NetworkManager[48851]: <info>  [1764008090.0322] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 24 18:14:50 compute-0 NetworkManager[48851]: <info>  [1764008090.0329] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 24 18:14:50 compute-0 NetworkManager[48851]: <info>  [1764008090.0333] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 24 18:14:50 compute-0 NetworkManager[48851]: <info>  [1764008090.0338] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 24 18:14:50 compute-0 NetworkManager[48851]: <info>  [1764008090.0344] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 24 18:14:50 compute-0 NetworkManager[48851]: <info>  [1764008090.0350] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 24 18:14:50 compute-0 NetworkManager[48851]: <info>  [1764008090.0356] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 24 18:14:50 compute-0 NetworkManager[48851]: <info>  [1764008090.0375] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 24 18:14:50 compute-0 NetworkManager[48851]: <info>  [1764008090.0387] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 24 18:14:50 compute-0 NetworkManager[48851]: <info>  [1764008090.0392] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 18:14:50 compute-0 NetworkManager[48851]: <info>  [1764008090.0407] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 24 18:14:50 compute-0 systemd[1]: Starting Network Manager Wait Online...
Nov 24 18:14:50 compute-0 NetworkManager[48851]: <info>  [1764008090.0441] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 24 18:14:50 compute-0 NetworkManager[48851]: <info>  [1764008090.0455] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 24 18:14:50 compute-0 NetworkManager[48851]: <info>  [1764008090.0459] dhcp4 (eth0): state changed new lease, address=38.102.83.27
Nov 24 18:14:50 compute-0 NetworkManager[48851]: <info>  [1764008090.0464] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 24 18:14:50 compute-0 NetworkManager[48851]: <info>  [1764008090.0473] device (lo): Activation: successful, device activated.
Nov 24 18:14:50 compute-0 NetworkManager[48851]: <info>  [1764008090.0490] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 24 18:14:50 compute-0 NetworkManager[48851]: <info>  [1764008090.0576] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 24 18:14:50 compute-0 NetworkManager[48851]: <info>  [1764008090.0612] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 24 18:14:50 compute-0 NetworkManager[48851]: <info>  [1764008090.0622] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 24 18:14:50 compute-0 NetworkManager[48851]: <info>  [1764008090.0627] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 24 18:14:50 compute-0 NetworkManager[48851]: <info>  [1764008090.0632] device (eth1): Activation: successful, device activated.
Nov 24 18:14:50 compute-0 NetworkManager[48851]: <info>  [1764008090.0673] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 24 18:14:50 compute-0 NetworkManager[48851]: <info>  [1764008090.0676] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 24 18:14:50 compute-0 sudo[48836]: pam_unix(sudo:session): session closed for user root
Nov 24 18:14:50 compute-0 NetworkManager[48851]: <info>  [1764008090.0683] manager: NetworkManager state is now CONNECTED_SITE
Nov 24 18:14:50 compute-0 NetworkManager[48851]: <info>  [1764008090.0689] device (eth0): Activation: successful, device activated.
Nov 24 18:14:50 compute-0 NetworkManager[48851]: <info>  [1764008090.0696] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 24 18:14:50 compute-0 NetworkManager[48851]: <info>  [1764008090.0730] manager: startup complete
Nov 24 18:14:50 compute-0 systemd[1]: Finished Network Manager Wait Online.
Nov 24 18:14:50 compute-0 sudo[49062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwdrdjzviwuirwfatgivfkazjtbctjkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008090.226695-168-171795933044206/AnsiballZ_dnf.py'
Nov 24 18:14:50 compute-0 sudo[49062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:14:50 compute-0 python3.9[49064]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:14:54 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 18:14:54 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 18:14:54 compute-0 systemd[1]: Reloading.
Nov 24 18:14:55 compute-0 systemd-sysv-generator[49113]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:14:55 compute-0 systemd-rc-local-generator[49109]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:14:55 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 18:14:55 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 18:14:55 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 18:14:55 compute-0 systemd[1]: run-r5a920b9428bd4ecdbe852baff3dac7b9.service: Deactivated successfully.
Nov 24 18:14:55 compute-0 sudo[49062]: pam_unix(sudo:session): session closed for user root
Nov 24 18:14:56 compute-0 sudo[49520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkivtesffyuruvuftdcxbrvlllpdcyzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008096.2271976-180-214183578124412/AnsiballZ_stat.py'
Nov 24 18:14:56 compute-0 sudo[49520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:14:56 compute-0 python3.9[49522]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:14:56 compute-0 sudo[49520]: pam_unix(sudo:session): session closed for user root
Nov 24 18:14:57 compute-0 sudo[49672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlfjwlqprxuljcssosmypzhzyaxpyclv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008096.8580627-189-244438393663974/AnsiballZ_ini_file.py'
Nov 24 18:14:57 compute-0 sudo[49672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:14:57 compute-0 python3.9[49674]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:14:57 compute-0 sudo[49672]: pam_unix(sudo:session): session closed for user root
Nov 24 18:14:58 compute-0 sudo[49826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avasjrxgahpoabqzyxecisllgpekcgqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008097.849802-199-207818858934725/AnsiballZ_ini_file.py'
Nov 24 18:14:58 compute-0 sudo[49826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:14:58 compute-0 python3.9[49828]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:14:58 compute-0 sudo[49826]: pam_unix(sudo:session): session closed for user root
Nov 24 18:14:58 compute-0 sudo[49978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwnkpklalptkitrpoiufdjoihxykgoog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008098.3895268-199-108421018587400/AnsiballZ_ini_file.py'
Nov 24 18:14:58 compute-0 sudo[49978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:14:58 compute-0 python3.9[49980]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:14:58 compute-0 sudo[49978]: pam_unix(sudo:session): session closed for user root
Nov 24 18:14:59 compute-0 sudo[50130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dubujbyrcolbfvghsmdwimpsgdmctlgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008098.9898193-214-271575960701887/AnsiballZ_ini_file.py'
Nov 24 18:14:59 compute-0 sudo[50130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:14:59 compute-0 python3.9[50132]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:14:59 compute-0 sudo[50130]: pam_unix(sudo:session): session closed for user root
Nov 24 18:14:59 compute-0 sudo[50282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekncumosdwyvnbwqdyziznmxbsirfxek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008099.5607216-214-195907221478903/AnsiballZ_ini_file.py'
Nov 24 18:14:59 compute-0 sudo[50282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:14:59 compute-0 python3.9[50284]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:14:59 compute-0 sudo[50282]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:00 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 18:15:00 compute-0 sudo[50434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyjvgkkurstgznpdcmsienmbkfmlhgnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008100.156014-229-101920566833729/AnsiballZ_stat.py'
Nov 24 18:15:00 compute-0 sudo[50434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:00 compute-0 python3.9[50436]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:15:00 compute-0 sudo[50434]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:01 compute-0 sudo[50557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sopviaovvdojyijgyvktwxwgnzbcijhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008100.156014-229-101920566833729/AnsiballZ_copy.py'
Nov 24 18:15:01 compute-0 sudo[50557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:01 compute-0 python3.9[50559]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764008100.156014-229-101920566833729/.source _original_basename=.6ce8bbu4 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:15:01 compute-0 sudo[50557]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:01 compute-0 sudo[50709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvbtxgigbvvmdigjsfobodfzawwajuib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008101.4367435-244-193921151359598/AnsiballZ_file.py'
Nov 24 18:15:01 compute-0 sudo[50709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:01 compute-0 python3.9[50711]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:15:01 compute-0 sudo[50709]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:02 compute-0 sudo[50861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbysyqwpstdkoxphiniqhktfhxyrexse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008102.0820851-252-183573198561704/AnsiballZ_edpm_os_net_config_mappings.py'
Nov 24 18:15:02 compute-0 sudo[50861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:02 compute-0 python3.9[50863]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 24 18:15:02 compute-0 sudo[50861]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:03 compute-0 sudo[51013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbxotowiricfgxenygckbbnzhxeoidsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008102.8958037-261-215549313835056/AnsiballZ_file.py'
Nov 24 18:15:03 compute-0 sudo[51013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:03 compute-0 python3.9[51015]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:15:03 compute-0 sudo[51013]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:03 compute-0 sudo[51165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zckeyekqbdvxqvlzmcjulhytsdthtwqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008103.6289904-271-281301089958597/AnsiballZ_stat.py'
Nov 24 18:15:03 compute-0 sudo[51165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:04 compute-0 sudo[51165]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:04 compute-0 sudo[51288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adkmfgwkueabafihympgtpgmwlghxyfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008103.6289904-271-281301089958597/AnsiballZ_copy.py'
Nov 24 18:15:04 compute-0 sudo[51288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:04 compute-0 sudo[51288]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:05 compute-0 sudo[51440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szlswnpqiotcckqkdymrashpgnlgyldd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008104.6978028-286-228071612505979/AnsiballZ_slurp.py'
Nov 24 18:15:05 compute-0 sudo[51440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:05 compute-0 python3.9[51442]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 24 18:15:05 compute-0 sudo[51440]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:06 compute-0 sudo[51615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtymmgjrhinarvtkzwnlokelcshjsmsp ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008105.4671092-295-128670388362531/async_wrapper.py j512682444113 300 /home/zuul/.ansible/tmp/ansible-tmp-1764008105.4671092-295-128670388362531/AnsiballZ_edpm_os_net_config.py _'
Nov 24 18:15:06 compute-0 sudo[51615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:06 compute-0 ansible-async_wrapper.py[51617]: Invoked with j512682444113 300 /home/zuul/.ansible/tmp/ansible-tmp-1764008105.4671092-295-128670388362531/AnsiballZ_edpm_os_net_config.py _
Nov 24 18:15:06 compute-0 ansible-async_wrapper.py[51620]: Starting module and watcher
Nov 24 18:15:06 compute-0 ansible-async_wrapper.py[51620]: Start watching 51621 (300)
Nov 24 18:15:06 compute-0 ansible-async_wrapper.py[51621]: Start module (51621)
Nov 24 18:15:06 compute-0 ansible-async_wrapper.py[51617]: Return async_wrapper task started.
Nov 24 18:15:06 compute-0 sudo[51615]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:06 compute-0 python3.9[51622]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Nov 24 18:15:07 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 24 18:15:07 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 24 18:15:07 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 24 18:15:07 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 24 18:15:07 compute-0 kernel: cfg80211: failed to load regulatory.db
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.1892] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51623 uid=0 result="success"
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.1905] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51623 uid=0 result="success"
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2360] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2361] audit: op="connection-add" uuid="285580ad-f048-411d-8e01-fe54e62f2276" name="br-ex-br" pid=51623 uid=0 result="success"
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2374] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2375] audit: op="connection-add" uuid="c11266a8-6fc0-4266-85b5-dcae315789b7" name="br-ex-port" pid=51623 uid=0 result="success"
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2385] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2386] audit: op="connection-add" uuid="86720774-b112-455d-806c-5f7854e8dc94" name="eth1-port" pid=51623 uid=0 result="success"
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2395] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2396] audit: op="connection-add" uuid="df4b830c-9e1a-47b8-baf9-c21721e8e040" name="vlan20-port" pid=51623 uid=0 result="success"
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2405] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2406] audit: op="connection-add" uuid="da5153fc-4229-4827-94ac-6e1ee8a20568" name="vlan21-port" pid=51623 uid=0 result="success"
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2415] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2416] audit: op="connection-add" uuid="4abca62d-4175-45c3-a1b7-18a7b1fadbb9" name="vlan22-port" pid=51623 uid=0 result="success"
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2425] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2426] audit: op="connection-add" uuid="d61e71f7-e4b3-4117-b21e-87d15b0a9b91" name="vlan23-port" pid=51623 uid=0 result="success"
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2442] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.timestamp,connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.method,ipv6.dhcp-timeout,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu" pid=51623 uid=0 result="success"
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2455] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2456] audit: op="connection-add" uuid="b3bc3db1-b67b-4fd3-8d15-af197881bb15" name="br-ex-if" pid=51623 uid=0 result="success"
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2495] audit: op="connection-update" uuid="730e1bbf-c4c7-52c0-85e9-2379c2b50bf6" name="ci-private-network" args="connection.timestamp,connection.slave-type,connection.master,connection.port-type,connection.controller,ipv6.routes,ipv6.addr-gen-mode,ipv6.method,ipv6.dns,ipv6.addresses,ipv6.routing-rules,ipv4.never-default,ipv4.method,ipv4.routes,ipv4.dns,ipv4.addresses,ipv4.routing-rules,ovs-interface.type,ovs-external-ids.data" pid=51623 uid=0 result="success"
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2508] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2509] audit: op="connection-add" uuid="e302bbdb-383e-4265-9522-035305242aca" name="vlan20-if" pid=51623 uid=0 result="success"
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2522] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2523] audit: op="connection-add" uuid="670ed63b-73be-486b-b4bf-95961f23ffe4" name="vlan21-if" pid=51623 uid=0 result="success"
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2535] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2537] audit: op="connection-add" uuid="1cdae8a7-f917-4bae-ab10-c6c13f970a21" name="vlan22-if" pid=51623 uid=0 result="success"
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2549] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2550] audit: op="connection-add" uuid="bb95f981-dffe-46e6-bbd1-952e1af482b5" name="vlan23-if" pid=51623 uid=0 result="success"
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2559] audit: op="connection-delete" uuid="3cf5caf6-dae0-3e12-91e8-cbb71d516e93" name="Wired connection 1" pid=51623 uid=0 result="success"
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2568] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2575] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2578] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (285580ad-f048-411d-8e01-fe54e62f2276)
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2578] audit: op="connection-activate" uuid="285580ad-f048-411d-8e01-fe54e62f2276" name="br-ex-br" pid=51623 uid=0 result="success"
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2580] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2585] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2587] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (c11266a8-6fc0-4266-85b5-dcae315789b7)
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2588] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2592] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2594] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (86720774-b112-455d-806c-5f7854e8dc94)
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2596] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2600] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2602] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (df4b830c-9e1a-47b8-baf9-c21721e8e040)
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2604] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2608] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2612] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (da5153fc-4229-4827-94ac-6e1ee8a20568)
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2613] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2617] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2619] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (4abca62d-4175-45c3-a1b7-18a7b1fadbb9)
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2620] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2625] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2627] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (d61e71f7-e4b3-4117-b21e-87d15b0a9b91)
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2628] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2630] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2631] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2635] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2638] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2641] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (b3bc3db1-b67b-4fd3-8d15-af197881bb15)
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2641] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2643] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2644] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2645] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2646] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2653] device (eth1): disconnecting for new activation request.
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2653] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2655] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2656] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2657] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2659] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2661] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2664] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (e302bbdb-383e-4265-9522-035305242aca)
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2665] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2666] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2667] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2668] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2670] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2672] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2675] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (670ed63b-73be-486b-b4bf-95961f23ffe4)
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2676] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2677] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2678] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2679] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2681] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2683] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2686] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (1cdae8a7-f917-4bae-ab10-c6c13f970a21)
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2686] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2688] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2689] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2690] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2692] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2696] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2700] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (bb95f981-dffe-46e6-bbd1-952e1af482b5)
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2701] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2704] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2706] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2707] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2709] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2721] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.method,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu" pid=51623 uid=0 result="success"
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2724] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2727] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2729] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2735] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2741] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2746] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2750] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2752] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2758] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 kernel: ovs-system: entered promiscuous mode
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2764] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2768] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2770] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2776] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 kernel: Timeout policy base is empty
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2782] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2786] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2788] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2793] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 systemd-udevd[51629]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2798] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2802] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2803] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2808] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2813] dhcp4 (eth0): canceled DHCP transaction
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2813] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2813] dhcp4 (eth0): state changed no lease
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2815] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2825] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2829] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51623 uid=0 result="fail" reason="Device is not activated"
Nov 24 18:15:08 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2917] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2920] dhcp4 (eth0): state changed new lease, address=38.102.83.27
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2928] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.2983] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 24 18:15:08 compute-0 kernel: br-ex: entered promiscuous mode
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3154] device (eth1): Activation: starting connection 'ci-private-network' (730e1bbf-c4c7-52c0-85e9-2379c2b50bf6)
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3164] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3169] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3185] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 24 18:15:08 compute-0 systemd-udevd[51628]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 18:15:08 compute-0 kernel: vlan22: entered promiscuous mode
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3191] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3194] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3203] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3214] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3226] device (eth1): state change: config -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3230] device (eth1): released from controller device eth1
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3237] device (eth1): disconnecting for new activation request.
Nov 24 18:15:08 compute-0 kernel: vlan20: entered promiscuous mode
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3237] audit: op="connection-activate" uuid="730e1bbf-c4c7-52c0-85e9-2379c2b50bf6" name="ci-private-network" pid=51623 uid=0 result="success"
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3238] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3239] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3240] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3241] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3242] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3243] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 systemd-udevd[51627]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3249] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3253] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3258] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3266] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3272] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3277] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3281] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3287] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3294] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 kernel: vlan21: entered promiscuous mode
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3299] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3311] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3317] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3356] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3357] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51623 uid=0 result="success"
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3358] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3365] device (eth1): Activation: starting connection 'ci-private-network' (730e1bbf-c4c7-52c0-85e9-2379c2b50bf6)
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3376] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3393] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3397] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 kernel: vlan23: entered promiscuous mode
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3425] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 24 18:15:08 compute-0 systemd-udevd[51729]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3435] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3447] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3456] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3469] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3475] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3488] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3501] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3509] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3565] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3566] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3568] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3569] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3570] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3575] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3582] device (eth1): Activation: successful, device activated.
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3587] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3593] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3599] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3606] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3612] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3617] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3623] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3630] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3642] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3655] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3712] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3713] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 18:15:08 compute-0 NetworkManager[48851]: <info>  [1764008108.3718] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 18:15:09 compute-0 NetworkManager[48851]: <info>  [1764008109.5298] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51623 uid=0 result="success"
Nov 24 18:15:09 compute-0 NetworkManager[48851]: <info>  [1764008109.7217] checkpoint[0x55e422c2c950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 24 18:15:09 compute-0 NetworkManager[48851]: <info>  [1764008109.7221] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51623 uid=0 result="success"
Nov 24 18:15:09 compute-0 sudo[51985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yniqksuxunskpgbfkpzltceaamsihtiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008109.4647753-295-268120018786795/AnsiballZ_async_status.py'
Nov 24 18:15:09 compute-0 sudo[51985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:10 compute-0 NetworkManager[48851]: <info>  [1764008110.0191] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51623 uid=0 result="success"
Nov 24 18:15:10 compute-0 NetworkManager[48851]: <info>  [1764008110.0202] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51623 uid=0 result="success"
Nov 24 18:15:10 compute-0 python3.9[51988]: ansible-ansible.legacy.async_status Invoked with jid=j512682444113.51617 mode=status _async_dir=/root/.ansible_async
Nov 24 18:15:10 compute-0 sudo[51985]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:10 compute-0 NetworkManager[48851]: <info>  [1764008110.2308] audit: op="networking-control" arg="global-dns-configuration" pid=51623 uid=0 result="success"
Nov 24 18:15:10 compute-0 NetworkManager[48851]: <info>  [1764008110.2372] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 24 18:15:10 compute-0 NetworkManager[48851]: <info>  [1764008110.2451] audit: op="networking-control" arg="global-dns-configuration" pid=51623 uid=0 result="success"
Nov 24 18:15:10 compute-0 NetworkManager[48851]: <info>  [1764008110.2472] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51623 uid=0 result="success"
Nov 24 18:15:10 compute-0 NetworkManager[48851]: <info>  [1764008110.3838] checkpoint[0x55e422c2ca20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 24 18:15:10 compute-0 NetworkManager[48851]: <info>  [1764008110.3843] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51623 uid=0 result="success"
Nov 24 18:15:10 compute-0 ansible-async_wrapper.py[51621]: Module complete (51621)
Nov 24 18:15:11 compute-0 ansible-async_wrapper.py[51620]: Done in kid B.
Nov 24 18:15:13 compute-0 sudo[52090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkrnulvsedfqlahaleupiefywdlgvuom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008109.4647753-295-268120018786795/AnsiballZ_async_status.py'
Nov 24 18:15:13 compute-0 sudo[52090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:13 compute-0 python3.9[52092]: ansible-ansible.legacy.async_status Invoked with jid=j512682444113.51617 mode=status _async_dir=/root/.ansible_async
Nov 24 18:15:13 compute-0 sudo[52090]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:13 compute-0 sudo[52190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuxgobqtsvvoanadpqtsuozcmqtisqzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008109.4647753-295-268120018786795/AnsiballZ_async_status.py'
Nov 24 18:15:13 compute-0 sudo[52190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:14 compute-0 python3.9[52192]: ansible-ansible.legacy.async_status Invoked with jid=j512682444113.51617 mode=cleanup _async_dir=/root/.ansible_async
Nov 24 18:15:14 compute-0 sudo[52190]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:14 compute-0 sudo[52342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usbwvuaflpdsbrvuvjzwvgcltlrfbksp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008114.2995408-322-84221145183548/AnsiballZ_stat.py'
Nov 24 18:15:14 compute-0 sudo[52342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:14 compute-0 python3.9[52344]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:15:14 compute-0 sudo[52342]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:15 compute-0 sudo[52465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qloubgbryqwbsohaymdzmzmnxmvkcjgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008114.2995408-322-84221145183548/AnsiballZ_copy.py'
Nov 24 18:15:15 compute-0 sudo[52465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:15 compute-0 python3.9[52467]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764008114.2995408-322-84221145183548/.source.returncode _original_basename=.sz_ajvkl follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:15:15 compute-0 sudo[52465]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:15 compute-0 sudo[52617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiabyomgwiioxtyesbobhcqnhuockrcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008115.589591-338-99110265194168/AnsiballZ_stat.py'
Nov 24 18:15:15 compute-0 sudo[52617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:16 compute-0 python3.9[52619]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:15:16 compute-0 sudo[52617]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:16 compute-0 sudo[52740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmdqmgevsozzndgkuxssjyjpvhqepfgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008115.589591-338-99110265194168/AnsiballZ_copy.py'
Nov 24 18:15:16 compute-0 sudo[52740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:16 compute-0 python3.9[52742]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764008115.589591-338-99110265194168/.source.cfg _original_basename=.lgqr30m9 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:15:16 compute-0 sudo[52740]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:17 compute-0 sudo[52893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugcrtkgqdcooutfhponradhcviliwwso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008116.77245-353-12759368769376/AnsiballZ_systemd.py'
Nov 24 18:15:17 compute-0 sudo[52893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:17 compute-0 python3.9[52895]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 18:15:17 compute-0 systemd[1]: Reloading Network Manager...
Nov 24 18:15:17 compute-0 NetworkManager[48851]: <info>  [1764008117.3540] audit: op="reload" arg="0" pid=52899 uid=0 result="success"
Nov 24 18:15:17 compute-0 NetworkManager[48851]: <info>  [1764008117.3545] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 24 18:15:17 compute-0 systemd[1]: Reloaded Network Manager.
Nov 24 18:15:17 compute-0 sudo[52893]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:17 compute-0 sshd-session[44855]: Connection closed by 192.168.122.30 port 50300
Nov 24 18:15:17 compute-0 sshd-session[44852]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:15:17 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Nov 24 18:15:17 compute-0 systemd[1]: session-9.scope: Consumed 47.179s CPU time.
Nov 24 18:15:17 compute-0 systemd-logind[822]: Session 9 logged out. Waiting for processes to exit.
Nov 24 18:15:17 compute-0 systemd-logind[822]: Removed session 9.
Nov 24 18:15:19 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 18:15:24 compute-0 sshd-session[52932]: Accepted publickey for zuul from 192.168.122.30 port 59920 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:15:24 compute-0 systemd-logind[822]: New session 10 of user zuul.
Nov 24 18:15:24 compute-0 systemd[1]: Started Session 10 of User zuul.
Nov 24 18:15:24 compute-0 sshd-session[52932]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:15:25 compute-0 python3.9[53085]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:15:26 compute-0 python3.9[53239]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 18:15:27 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 18:15:27 compute-0 python3.9[53433]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:15:27 compute-0 sshd-session[52935]: Connection closed by 192.168.122.30 port 59920
Nov 24 18:15:27 compute-0 sshd-session[52932]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:15:27 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Nov 24 18:15:27 compute-0 systemd[1]: session-10.scope: Consumed 2.430s CPU time.
Nov 24 18:15:27 compute-0 systemd-logind[822]: Session 10 logged out. Waiting for processes to exit.
Nov 24 18:15:27 compute-0 systemd-logind[822]: Removed session 10.
Nov 24 18:15:33 compute-0 sshd-session[53462]: Accepted publickey for zuul from 192.168.122.30 port 52724 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:15:33 compute-0 systemd-logind[822]: New session 11 of user zuul.
Nov 24 18:15:33 compute-0 systemd[1]: Started Session 11 of User zuul.
Nov 24 18:15:33 compute-0 sshd-session[53462]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:15:34 compute-0 python3.9[53615]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:15:35 compute-0 python3.9[53769]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:15:36 compute-0 sudo[53924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnjzewxxxizpcgucjhjhdpqnujoeduqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008136.1385708-40-143251778294846/AnsiballZ_setup.py'
Nov 24 18:15:36 compute-0 sudo[53924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:36 compute-0 python3.9[53926]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 18:15:36 compute-0 sudo[53924]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:37 compute-0 sudo[54008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjqjjlylosltmildjnveledsgpalczjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008136.1385708-40-143251778294846/AnsiballZ_dnf.py'
Nov 24 18:15:37 compute-0 sudo[54008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:37 compute-0 python3.9[54010]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:15:38 compute-0 sudo[54008]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:39 compute-0 sudo[54162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjndroslozmlefdvispsjltyhcpdhkrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008139.0990193-52-275379154488808/AnsiballZ_setup.py'
Nov 24 18:15:39 compute-0 sudo[54162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:39 compute-0 python3.9[54164]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 18:15:39 compute-0 sudo[54162]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:40 compute-0 sudo[54357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbkbdckxckphdlbzkflzonfdrwkxuiub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008140.0732353-63-164804682466665/AnsiballZ_file.py'
Nov 24 18:15:40 compute-0 sudo[54357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:40 compute-0 python3.9[54359]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:15:40 compute-0 sudo[54357]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:41 compute-0 sudo[54509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqwnquddbzdmiitggprxdarfpmpikmbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008140.813195-71-96476771542766/AnsiballZ_command.py'
Nov 24 18:15:41 compute-0 sudo[54509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:41 compute-0 python3.9[54511]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:15:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3724688552-merged.mount: Deactivated successfully.
Nov 24 18:15:41 compute-0 podman[54512]: 2025-11-24 18:15:41.510039778 +0000 UTC m=+0.043755514 system refresh
Nov 24 18:15:41 compute-0 sudo[54509]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:42 compute-0 sudo[54672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqrjpkfdquhjspgtraiffokjsihgjavq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008141.7001703-79-250137618827268/AnsiballZ_stat.py'
Nov 24 18:15:42 compute-0 sudo[54672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:42 compute-0 python3.9[54674]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:15:42 compute-0 sudo[54672]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:42 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 18:15:42 compute-0 sudo[54795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bynklcignbmwaoktretxynsacdcfidoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008141.7001703-79-250137618827268/AnsiballZ_copy.py'
Nov 24 18:15:42 compute-0 sudo[54795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:42 compute-0 python3.9[54797]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764008141.7001703-79-250137618827268/.source.json follow=False _original_basename=podman_network_config.j2 checksum=d67b1c249ab97334a6ce0bba856dd73ecc527dd8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:15:43 compute-0 sudo[54795]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:43 compute-0 sudo[54947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tprsfwoyypphiklmhevveuxyqdtcvmhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008143.1383765-94-269533408929699/AnsiballZ_stat.py'
Nov 24 18:15:43 compute-0 sudo[54947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:43 compute-0 python3.9[54949]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:15:43 compute-0 sudo[54947]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:43 compute-0 sudo[55070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpjjplomdzwxbkxqciwtghcgcosrslvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008143.1383765-94-269533408929699/AnsiballZ_copy.py'
Nov 24 18:15:43 compute-0 sudo[55070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:44 compute-0 python3.9[55072]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764008143.1383765-94-269533408929699/.source.conf follow=False _original_basename=registries.conf.j2 checksum=97513ee69a4b3dc3c4fd06acbbcaa9a991e77aee backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:15:44 compute-0 sudo[55070]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:44 compute-0 sudo[55222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-envokopanqsjhdwsfmfzatvzlkolzvzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008144.270952-110-47201973445289/AnsiballZ_ini_file.py'
Nov 24 18:15:44 compute-0 sudo[55222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:44 compute-0 python3.9[55224]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:15:44 compute-0 sudo[55222]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:45 compute-0 sudo[55374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbirhiahbardkvigigagqfvxwupsqojv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008144.9842613-110-86911230795866/AnsiballZ_ini_file.py'
Nov 24 18:15:45 compute-0 sudo[55374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:45 compute-0 python3.9[55376]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:15:45 compute-0 sudo[55374]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:45 compute-0 sudo[55526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dosqouzyzfmrcmrmbvqbledfhchyvvel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008145.4968724-110-78969377789824/AnsiballZ_ini_file.py'
Nov 24 18:15:45 compute-0 sudo[55526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:45 compute-0 python3.9[55528]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:15:45 compute-0 sudo[55526]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:46 compute-0 sudo[55678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiolmowxthdbybkkqwoenemcaathdirh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008146.017562-110-157865478224382/AnsiballZ_ini_file.py'
Nov 24 18:15:46 compute-0 sudo[55678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:46 compute-0 python3.9[55680]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:15:46 compute-0 sudo[55678]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:46 compute-0 sudo[55830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epwxxcfetyurptdslzinjjbynrgzngop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008146.677433-141-201751675075921/AnsiballZ_dnf.py'
Nov 24 18:15:46 compute-0 sudo[55830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:47 compute-0 python3.9[55832]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:15:48 compute-0 sudo[55830]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:49 compute-0 sudo[55983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxbwqqtfsietvinlgonmmxmarqoxovuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008148.7171946-152-105191970031355/AnsiballZ_setup.py'
Nov 24 18:15:49 compute-0 sudo[55983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:49 compute-0 python3.9[55985]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:15:49 compute-0 sudo[55983]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:49 compute-0 sudo[56137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqgaswcvgcntxkjjohfeandikilawrfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008149.4982402-160-170149547695617/AnsiballZ_stat.py'
Nov 24 18:15:49 compute-0 sudo[56137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:49 compute-0 python3.9[56139]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:15:50 compute-0 sudo[56137]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:50 compute-0 sudo[56289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipyxinqdnqftemfovbqcbnyzewxlbgfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008150.222678-169-37428377730359/AnsiballZ_stat.py'
Nov 24 18:15:50 compute-0 sudo[56289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:50 compute-0 python3.9[56291]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:15:50 compute-0 sudo[56289]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:51 compute-0 sudo[56441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvwxwqbdnrdwukepcvvddfxdmhnqsosb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008150.9705813-179-22635880214983/AnsiballZ_command.py'
Nov 24 18:15:51 compute-0 sudo[56441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:51 compute-0 python3.9[56443]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:15:51 compute-0 sudo[56441]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:52 compute-0 sudo[56594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sagrybutqurqhvtiyjxhetrthkdasogb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008151.675627-189-68494160469050/AnsiballZ_service_facts.py'
Nov 24 18:15:52 compute-0 sudo[56594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:52 compute-0 python3.9[56596]: ansible-service_facts Invoked
Nov 24 18:15:52 compute-0 network[56613]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 18:15:52 compute-0 network[56614]: 'network-scripts' will be removed from distribution in near future.
Nov 24 18:15:52 compute-0 network[56615]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 18:15:55 compute-0 sudo[56594]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:55 compute-0 sudo[56898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdvbxuathbvdfynsaemwdfdcoujhjqda ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764008155.6496723-204-13785350467817/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764008155.6496723-204-13785350467817/args'
Nov 24 18:15:55 compute-0 sudo[56898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:55 compute-0 sudo[56898]: pam_unix(sudo:session): session closed for user root
Nov 24 18:15:56 compute-0 sudo[57065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uihppvdliotpbxwzvqfauukycjaxdunt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008156.19937-215-163922588773162/AnsiballZ_dnf.py'
Nov 24 18:15:56 compute-0 sudo[57065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:15:56 compute-0 python3.9[57067]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:15:59 compute-0 sudo[57065]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:00 compute-0 sudo[57219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlefdcksaldoqcfcgippuzqkbfovumdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008159.739372-228-12694461958233/AnsiballZ_package_facts.py'
Nov 24 18:16:00 compute-0 sudo[57219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:00 compute-0 python3.9[57221]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 24 18:16:00 compute-0 sudo[57219]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:01 compute-0 sudo[57371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyluqzvvrmirwesuivsevyrgjlutectm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008161.2220657-238-220742480729490/AnsiballZ_stat.py'
Nov 24 18:16:01 compute-0 sudo[57371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:01 compute-0 python3.9[57373]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:16:01 compute-0 sudo[57371]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:02 compute-0 sudo[57496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkjvxchgsabbyfudaikhpboeqirtrrjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008161.2220657-238-220742480729490/AnsiballZ_copy.py'
Nov 24 18:16:02 compute-0 sudo[57496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:02 compute-0 python3.9[57498]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764008161.2220657-238-220742480729490/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:16:02 compute-0 sudo[57496]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:02 compute-0 sudo[57650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-girswuatkubvsnpgjnpisghucyaptnpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008162.4424393-253-139876111031936/AnsiballZ_stat.py'
Nov 24 18:16:02 compute-0 sudo[57650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:02 compute-0 python3.9[57652]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:16:02 compute-0 sudo[57650]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:03 compute-0 sudo[57775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whimmnsmwlvuojhlezavjndxepgpofmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008162.4424393-253-139876111031936/AnsiballZ_copy.py'
Nov 24 18:16:03 compute-0 sudo[57775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:03 compute-0 python3.9[57777]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764008162.4424393-253-139876111031936/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:16:03 compute-0 sudo[57775]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:04 compute-0 sudo[57929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swappmgrotxvpqjrtmmhqltcqsiiyanu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008163.7873938-274-1426108390727/AnsiballZ_lineinfile.py'
Nov 24 18:16:04 compute-0 sudo[57929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:04 compute-0 python3.9[57931]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:16:04 compute-0 sudo[57929]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:05 compute-0 sudo[58083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgnkjwzkvuuwuafcfcwzevogrjxxfjrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008164.918055-289-39535675173676/AnsiballZ_setup.py'
Nov 24 18:16:05 compute-0 sudo[58083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:05 compute-0 python3.9[58085]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 18:16:05 compute-0 sudo[58083]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:06 compute-0 sudo[58167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbwktdvarxvgtcdghfcwpfyngulriafs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008164.918055-289-39535675173676/AnsiballZ_systemd.py'
Nov 24 18:16:06 compute-0 sudo[58167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:06 compute-0 python3.9[58169]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:16:06 compute-0 sudo[58167]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:07 compute-0 sudo[58321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyvbqszygvisinurztgbsceprkyflrnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008167.1481578-305-109569053259141/AnsiballZ_setup.py'
Nov 24 18:16:07 compute-0 sudo[58321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:07 compute-0 python3.9[58323]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 18:16:07 compute-0 sudo[58321]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:08 compute-0 sudo[58405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yifvapbeilxyktkrekuszkumjfrejzjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008167.1481578-305-109569053259141/AnsiballZ_systemd.py'
Nov 24 18:16:08 compute-0 sudo[58405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:08 compute-0 python3.9[58407]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 18:16:08 compute-0 chronyd[831]: chronyd exiting
Nov 24 18:16:08 compute-0 systemd[1]: Stopping NTP client/server...
Nov 24 18:16:08 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Nov 24 18:16:08 compute-0 systemd[1]: Stopped NTP client/server.
Nov 24 18:16:08 compute-0 systemd[1]: Starting NTP client/server...
Nov 24 18:16:08 compute-0 chronyd[58415]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 24 18:16:08 compute-0 chronyd[58415]: Frequency -24.803 +/- 0.135 ppm read from /var/lib/chrony/drift
Nov 24 18:16:08 compute-0 chronyd[58415]: Loaded seccomp filter (level 2)
Nov 24 18:16:08 compute-0 systemd[1]: Started NTP client/server.
Nov 24 18:16:08 compute-0 sudo[58405]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:08 compute-0 sshd-session[53465]: Connection closed by 192.168.122.30 port 52724
Nov 24 18:16:08 compute-0 sshd-session[53462]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:16:08 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Nov 24 18:16:08 compute-0 systemd[1]: session-11.scope: Consumed 24.043s CPU time.
Nov 24 18:16:08 compute-0 systemd-logind[822]: Session 11 logged out. Waiting for processes to exit.
Nov 24 18:16:08 compute-0 systemd-logind[822]: Removed session 11.
Nov 24 18:16:14 compute-0 sshd-session[58441]: Accepted publickey for zuul from 192.168.122.30 port 37010 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:16:14 compute-0 systemd-logind[822]: New session 12 of user zuul.
Nov 24 18:16:14 compute-0 systemd[1]: Started Session 12 of User zuul.
Nov 24 18:16:14 compute-0 sshd-session[58441]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:16:15 compute-0 sudo[58594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqmrokriakguqjcgkseiizhjmmmaqjku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008175.007718-22-264293967777329/AnsiballZ_file.py'
Nov 24 18:16:15 compute-0 sudo[58594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:15 compute-0 python3.9[58596]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:16:15 compute-0 sudo[58594]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:16 compute-0 sudo[58746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpqlctzersqfhoknvijcyvkaodhurbnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008175.8267186-34-146922593841782/AnsiballZ_stat.py'
Nov 24 18:16:16 compute-0 sudo[58746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:16 compute-0 python3.9[58748]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:16:16 compute-0 sudo[58746]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:16 compute-0 sudo[58869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfyexmcunbvxtgubdxolrtcylwtcbjnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008175.8267186-34-146922593841782/AnsiballZ_copy.py'
Nov 24 18:16:16 compute-0 sudo[58869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:17 compute-0 python3.9[58871]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764008175.8267186-34-146922593841782/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:16:17 compute-0 sudo[58869]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:17 compute-0 sshd-session[58444]: Connection closed by 192.168.122.30 port 37010
Nov 24 18:16:17 compute-0 sshd-session[58441]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:16:17 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Nov 24 18:16:17 compute-0 systemd[1]: session-12.scope: Consumed 1.534s CPU time.
Nov 24 18:16:17 compute-0 systemd-logind[822]: Session 12 logged out. Waiting for processes to exit.
Nov 24 18:16:17 compute-0 systemd-logind[822]: Removed session 12.
Nov 24 18:16:24 compute-0 sshd-session[58897]: Accepted publickey for zuul from 192.168.122.30 port 59998 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:16:24 compute-0 systemd-logind[822]: New session 13 of user zuul.
Nov 24 18:16:24 compute-0 systemd[1]: Started Session 13 of User zuul.
Nov 24 18:16:24 compute-0 sshd-session[58897]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:16:25 compute-0 python3.9[59050]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:16:26 compute-0 sudo[59204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxbzyvsndwuixidbmkcmnqexienndenn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008185.8043978-33-194950446543466/AnsiballZ_file.py'
Nov 24 18:16:26 compute-0 sudo[59204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:26 compute-0 python3.9[59206]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:16:26 compute-0 sudo[59204]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:27 compute-0 sudo[59379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-megntfvmafpxpbsmfkhdctnkzqeydglj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008186.6649513-41-220773230077118/AnsiballZ_stat.py'
Nov 24 18:16:27 compute-0 sudo[59379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:27 compute-0 python3.9[59381]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:16:27 compute-0 sudo[59379]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:27 compute-0 sudo[59502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akiditnpwmrrpgldufhchhjcjmwoufxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008186.6649513-41-220773230077118/AnsiballZ_copy.py'
Nov 24 18:16:27 compute-0 sudo[59502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:28 compute-0 python3.9[59504]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764008186.6649513-41-220773230077118/.source.json _original_basename=.vze46rqq follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:16:28 compute-0 sudo[59502]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:28 compute-0 sudo[59654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejigkuqszgkkudoedvfeqzezhrlhgdjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008188.3607852-64-281176437560661/AnsiballZ_stat.py'
Nov 24 18:16:28 compute-0 sudo[59654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:28 compute-0 python3.9[59656]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:16:28 compute-0 sudo[59654]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:29 compute-0 sudo[59777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpgsdajejzuapydqloiqibqxwqthaxxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008188.3607852-64-281176437560661/AnsiballZ_copy.py'
Nov 24 18:16:29 compute-0 sudo[59777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:29 compute-0 python3.9[59779]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764008188.3607852-64-281176437560661/.source _original_basename=.peo0bio8 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:16:29 compute-0 sudo[59777]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:29 compute-0 sudo[59929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kotlywepcztnrllpulpxkcgzekghrcwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008189.6110537-80-30033631402123/AnsiballZ_file.py'
Nov 24 18:16:29 compute-0 sudo[59929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:30 compute-0 python3.9[59931]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:16:30 compute-0 sudo[59929]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:30 compute-0 sudo[60082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omzxgbmphslprvifsgfwbnnhkcayeiag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008190.2142744-88-93909974305958/AnsiballZ_stat.py'
Nov 24 18:16:30 compute-0 sudo[60082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:30 compute-0 python3.9[60084]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:16:30 compute-0 sudo[60082]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:30 compute-0 sudo[60205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arwowtqmvoutwogmgerxbaghynkyfyzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008190.2142744-88-93909974305958/AnsiballZ_copy.py'
Nov 24 18:16:30 compute-0 sudo[60205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:31 compute-0 python3.9[60207]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764008190.2142744-88-93909974305958/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:16:31 compute-0 sudo[60205]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:31 compute-0 sudo[60357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aboockqyanluuavfaufgmqfewigdzpfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008191.3352098-88-247569987972936/AnsiballZ_stat.py'
Nov 24 18:16:31 compute-0 sudo[60357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:31 compute-0 python3.9[60359]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:16:31 compute-0 sudo[60357]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:32 compute-0 sudo[60480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtnhfltdaidrwmudytxpzjageljzwfnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008191.3352098-88-247569987972936/AnsiballZ_copy.py'
Nov 24 18:16:32 compute-0 sudo[60480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:32 compute-0 python3.9[60482]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764008191.3352098-88-247569987972936/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:16:32 compute-0 sudo[60480]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:32 compute-0 sudo[60632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgzmrwuxentchjhsqombmcucbixhzrwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008192.5474024-117-241102555772093/AnsiballZ_file.py'
Nov 24 18:16:32 compute-0 sudo[60632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:33 compute-0 python3.9[60634]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:16:33 compute-0 sudo[60632]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:33 compute-0 sudo[60784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqglrghknfmadsbbbzoafllxvozxfwmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008193.2471132-125-51856586266958/AnsiballZ_stat.py'
Nov 24 18:16:33 compute-0 sudo[60784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:33 compute-0 python3.9[60786]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:16:33 compute-0 sudo[60784]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:33 compute-0 sudo[60907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgarerqhfcgcxtrzmlrpckuajzatnave ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008193.2471132-125-51856586266958/AnsiballZ_copy.py'
Nov 24 18:16:33 compute-0 sudo[60907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:34 compute-0 python3.9[60909]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764008193.2471132-125-51856586266958/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:16:34 compute-0 sudo[60907]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:34 compute-0 sudo[61059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tknelbnmwkrsjguehwuiqfwqpccflgxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008194.3182425-140-198818339648927/AnsiballZ_stat.py'
Nov 24 18:16:34 compute-0 sudo[61059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:34 compute-0 python3.9[61061]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:16:34 compute-0 sudo[61059]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:35 compute-0 sudo[61182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgxjhqlbpkbffrivpnlfutxupouzprhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008194.3182425-140-198818339648927/AnsiballZ_copy.py'
Nov 24 18:16:35 compute-0 sudo[61182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:35 compute-0 python3.9[61184]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764008194.3182425-140-198818339648927/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:16:35 compute-0 sudo[61182]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:36 compute-0 sudo[61334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wljsysiplhogoxxopjduburevdnmlipm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008195.420926-155-87747207991225/AnsiballZ_systemd.py'
Nov 24 18:16:36 compute-0 sudo[61334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:36 compute-0 python3.9[61336]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:16:36 compute-0 systemd[1]: Reloading.
Nov 24 18:16:36 compute-0 systemd-sysv-generator[61367]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:16:36 compute-0 systemd-rc-local-generator[61360]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:16:36 compute-0 systemd[1]: Reloading.
Nov 24 18:16:36 compute-0 systemd-rc-local-generator[61401]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:16:36 compute-0 systemd-sysv-generator[61405]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:16:36 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Nov 24 18:16:36 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Nov 24 18:16:36 compute-0 sudo[61334]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:37 compute-0 sudo[61562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skqbulncnctieubhesmgxxxcdhrxogyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008197.0021737-163-120840986693446/AnsiballZ_stat.py'
Nov 24 18:16:37 compute-0 sudo[61562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:37 compute-0 python3.9[61564]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:16:37 compute-0 sudo[61562]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:37 compute-0 sudo[61685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbnsccwdivcwmmphrqhxqwkimrmcgrye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008197.0021737-163-120840986693446/AnsiballZ_copy.py'
Nov 24 18:16:37 compute-0 sudo[61685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:37 compute-0 python3.9[61687]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764008197.0021737-163-120840986693446/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:16:37 compute-0 sudo[61685]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:38 compute-0 sudo[61837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqpmtspdlxrxaescswkhctrcprdzmsbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008198.0969722-178-151704414447763/AnsiballZ_stat.py'
Nov 24 18:16:38 compute-0 sudo[61837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:38 compute-0 python3.9[61839]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:16:38 compute-0 sudo[61837]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:38 compute-0 sudo[61960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgclfhngqjdfmcppsjvnejvtcbhrkzvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008198.0969722-178-151704414447763/AnsiballZ_copy.py'
Nov 24 18:16:38 compute-0 sudo[61960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:39 compute-0 python3.9[61962]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764008198.0969722-178-151704414447763/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:16:39 compute-0 sudo[61960]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:39 compute-0 sudo[62112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrdjxehvdtoufosrklamoitdjhalcazj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008199.1949024-193-70970397373141/AnsiballZ_systemd.py'
Nov 24 18:16:39 compute-0 sudo[62112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:39 compute-0 python3.9[62114]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:16:39 compute-0 systemd[1]: Reloading.
Nov 24 18:16:39 compute-0 systemd-sysv-generator[62142]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:16:39 compute-0 systemd-rc-local-generator[62136]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:16:40 compute-0 systemd[1]: Reloading.
Nov 24 18:16:40 compute-0 systemd-rc-local-generator[62175]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:16:40 compute-0 systemd-sysv-generator[62178]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:16:40 compute-0 systemd[1]: Starting Create netns directory...
Nov 24 18:16:40 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 18:16:40 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 18:16:40 compute-0 systemd[1]: Finished Create netns directory.
Nov 24 18:16:40 compute-0 sudo[62112]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:41 compute-0 python3.9[62339]: ansible-ansible.builtin.service_facts Invoked
Nov 24 18:16:41 compute-0 network[62356]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 18:16:41 compute-0 network[62357]: 'network-scripts' will be removed from distribution in near future.
Nov 24 18:16:41 compute-0 network[62358]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 18:16:44 compute-0 sudo[62618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omwwokbuckukretigagmegsmynonnfiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008204.6255865-209-221641377356019/AnsiballZ_systemd.py'
Nov 24 18:16:44 compute-0 sudo[62618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:45 compute-0 python3.9[62620]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:16:45 compute-0 systemd[1]: Reloading.
Nov 24 18:16:45 compute-0 systemd-rc-local-generator[62647]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:16:45 compute-0 systemd-sysv-generator[62652]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:16:45 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 24 18:16:45 compute-0 iptables.init[62659]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 24 18:16:45 compute-0 iptables.init[62659]: iptables: Flushing firewall rules: [  OK  ]
Nov 24 18:16:45 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Nov 24 18:16:45 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 24 18:16:45 compute-0 sudo[62618]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:46 compute-0 sudo[62854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unucnwkilxevthpxishakklpefonxobi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008205.820048-209-143989027463544/AnsiballZ_systemd.py'
Nov 24 18:16:46 compute-0 sudo[62854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:46 compute-0 python3.9[62856]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:16:46 compute-0 sudo[62854]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:46 compute-0 sudo[63008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adiomcnmgopqllbfxojedfyuigdczhlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008206.6641107-225-277326494167634/AnsiballZ_systemd.py'
Nov 24 18:16:46 compute-0 sudo[63008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:47 compute-0 python3.9[63010]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:16:47 compute-0 systemd[1]: Reloading.
Nov 24 18:16:47 compute-0 systemd-rc-local-generator[63038]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:16:47 compute-0 systemd-sysv-generator[63044]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:16:47 compute-0 systemd[1]: Starting Netfilter Tables...
Nov 24 18:16:47 compute-0 systemd[1]: Finished Netfilter Tables.
Nov 24 18:16:47 compute-0 sudo[63008]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:47 compute-0 sudo[63201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhjcqupsraeytvkhwszhyuwgcvmmlwac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008207.604459-233-167239137004853/AnsiballZ_command.py'
Nov 24 18:16:47 compute-0 sudo[63201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:48 compute-0 python3.9[63203]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:16:48 compute-0 sudo[63201]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:48 compute-0 sudo[63354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqewyvctaahdysjpbzxfjpradnfiabuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008208.4958873-247-68794356462798/AnsiballZ_stat.py'
Nov 24 18:16:48 compute-0 sudo[63354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:48 compute-0 python3.9[63356]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:16:48 compute-0 sudo[63354]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:49 compute-0 sudo[63479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmgubqfmbuzwxjenbbjpeollfboocpjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008208.4958873-247-68794356462798/AnsiballZ_copy.py'
Nov 24 18:16:49 compute-0 sudo[63479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:49 compute-0 python3.9[63481]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764008208.4958873-247-68794356462798/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:16:49 compute-0 sudo[63479]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:49 compute-0 sudo[63632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnyysemtlnsjgxtagenfseqdfzrpfpef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008209.624282-262-66058646411767/AnsiballZ_systemd.py'
Nov 24 18:16:49 compute-0 sudo[63632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:50 compute-0 python3.9[63634]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 18:16:50 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Nov 24 18:16:50 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Nov 24 18:16:50 compute-0 sshd[1009]: Received SIGHUP; restarting.
Nov 24 18:16:50 compute-0 sshd[1009]: Server listening on 0.0.0.0 port 22.
Nov 24 18:16:50 compute-0 sshd[1009]: Server listening on :: port 22.
Nov 24 18:16:50 compute-0 sudo[63632]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:50 compute-0 sudo[63788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgjcjcoxqvtjbiunlsaapynhctiebjeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008210.3712118-270-191697426146906/AnsiballZ_file.py'
Nov 24 18:16:50 compute-0 sudo[63788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:50 compute-0 python3.9[63790]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:16:50 compute-0 sudo[63788]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:51 compute-0 sudo[63940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gotichidrmykvdhhoynuqvsquiokkchd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008210.9469914-278-152266197092233/AnsiballZ_stat.py'
Nov 24 18:16:51 compute-0 sudo[63940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:51 compute-0 python3.9[63942]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:16:51 compute-0 sudo[63940]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:51 compute-0 sudo[64063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izeqotwtztvdgugcdckbunivgivcbynt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008210.9469914-278-152266197092233/AnsiballZ_copy.py'
Nov 24 18:16:51 compute-0 sudo[64063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:52 compute-0 python3.9[64065]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764008210.9469914-278-152266197092233/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:16:52 compute-0 sudo[64063]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:52 compute-0 sudo[64215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uosbtunbsnhfyptjxgodidnkiwltpjqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008212.27607-296-111689441824756/AnsiballZ_timezone.py'
Nov 24 18:16:52 compute-0 sudo[64215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:52 compute-0 python3.9[64217]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 24 18:16:52 compute-0 systemd[1]: Starting Time & Date Service...
Nov 24 18:16:53 compute-0 systemd[1]: Started Time & Date Service.
Nov 24 18:16:53 compute-0 sudo[64215]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:53 compute-0 sudo[64371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pihzoduulaxxdvykiphkqqztamvxvlqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008213.2714396-305-109045840215221/AnsiballZ_file.py'
Nov 24 18:16:53 compute-0 sudo[64371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:53 compute-0 python3.9[64373]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:16:53 compute-0 sudo[64371]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:54 compute-0 sudo[64523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykmgujzzkskyayemgzqffivjnmhsaqmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008213.90673-313-271355312258921/AnsiballZ_stat.py'
Nov 24 18:16:54 compute-0 sudo[64523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:54 compute-0 python3.9[64525]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:16:54 compute-0 sudo[64523]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:54 compute-0 sudo[64646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eitxboabosaoqvbxvioyeqzutfcekumb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008213.90673-313-271355312258921/AnsiballZ_copy.py'
Nov 24 18:16:54 compute-0 sudo[64646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:54 compute-0 python3.9[64648]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764008213.90673-313-271355312258921/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:16:54 compute-0 sudo[64646]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:55 compute-0 sudo[64798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqegzbacktmlwelrmedcgkqnnclkjsax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008214.9693563-328-10373998215694/AnsiballZ_stat.py'
Nov 24 18:16:55 compute-0 sudo[64798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:55 compute-0 python3.9[64800]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:16:55 compute-0 sudo[64798]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:55 compute-0 sudo[64921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uipxbljbriiiuffjsbqhxwgugaoznqjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008214.9693563-328-10373998215694/AnsiballZ_copy.py'
Nov 24 18:16:55 compute-0 sudo[64921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:55 compute-0 python3.9[64923]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764008214.9693563-328-10373998215694/.source.yaml _original_basename=.x6g0j1le follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:16:55 compute-0 sudo[64921]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:56 compute-0 sudo[65073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quusdeptmgrqibdgrxhomrsxrnhydwtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008216.1166244-343-206024355879376/AnsiballZ_stat.py'
Nov 24 18:16:56 compute-0 sudo[65073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:56 compute-0 python3.9[65075]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:16:56 compute-0 sudo[65073]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:56 compute-0 sudo[65196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjlzavoaojoegqpubvxyzldnquioejbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008216.1166244-343-206024355879376/AnsiballZ_copy.py'
Nov 24 18:16:56 compute-0 sudo[65196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:57 compute-0 python3.9[65198]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764008216.1166244-343-206024355879376/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:16:57 compute-0 sudo[65196]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:57 compute-0 sudo[65348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bupodrsvyqsuijwaedaedzcmptttutwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008217.2621257-358-34910225817658/AnsiballZ_command.py'
Nov 24 18:16:57 compute-0 sudo[65348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:57 compute-0 python3.9[65350]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:16:57 compute-0 sudo[65348]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:58 compute-0 sudo[65501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzbhycyrsmcvqviywexkavqatgzcszja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008217.850338-366-123375627604744/AnsiballZ_command.py'
Nov 24 18:16:58 compute-0 sudo[65501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:58 compute-0 python3.9[65503]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:16:58 compute-0 sudo[65501]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:58 compute-0 sudo[65654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhspswoeczyynagkpfryaajbsczyxfwp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764008218.4415698-374-242686912004280/AnsiballZ_edpm_nftables_from_files.py'
Nov 24 18:16:58 compute-0 sudo[65654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:58 compute-0 python3[65656]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 24 18:16:59 compute-0 sudo[65654]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:59 compute-0 sudo[65806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxutafhtactgejhdkrtqmqyxxmcufuvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008219.1507802-382-50136920939963/AnsiballZ_stat.py'
Nov 24 18:16:59 compute-0 sudo[65806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:16:59 compute-0 python3.9[65808]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:16:59 compute-0 sudo[65806]: pam_unix(sudo:session): session closed for user root
Nov 24 18:16:59 compute-0 sudo[65929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pimppunbgbuodjlongkbftduzeonlftv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008219.1507802-382-50136920939963/AnsiballZ_copy.py'
Nov 24 18:16:59 compute-0 sudo[65929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:00 compute-0 python3.9[65931]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764008219.1507802-382-50136920939963/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:17:00 compute-0 sudo[65929]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:00 compute-0 sudo[66081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilfwzpcmlanlotjllzewbkwabtnngfpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008220.2008538-397-246070300636422/AnsiballZ_stat.py'
Nov 24 18:17:00 compute-0 sudo[66081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:00 compute-0 python3.9[66083]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:17:00 compute-0 sudo[66081]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:00 compute-0 sudo[66204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlfbmoqtnqzksswxzmfxfwrxfgyfhrum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008220.2008538-397-246070300636422/AnsiballZ_copy.py'
Nov 24 18:17:00 compute-0 sudo[66204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:01 compute-0 python3.9[66206]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764008220.2008538-397-246070300636422/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:17:01 compute-0 sudo[66204]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:01 compute-0 sudo[66356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbolfkfpxwqlkcczyzpdcoeycpvwukwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008221.269204-412-214075281157543/AnsiballZ_stat.py'
Nov 24 18:17:01 compute-0 sudo[66356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:01 compute-0 python3.9[66358]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:17:01 compute-0 sudo[66356]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:02 compute-0 sudo[66479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miexpnbhgqkgnidihywrwhnmwoeounea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008221.269204-412-214075281157543/AnsiballZ_copy.py'
Nov 24 18:17:02 compute-0 sudo[66479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:02 compute-0 python3.9[66481]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764008221.269204-412-214075281157543/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:17:02 compute-0 sudo[66479]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:03 compute-0 sudo[66631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrwfmcqyjvujdwgkmsbvsamoztxgqaar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008222.67422-427-14124389126245/AnsiballZ_stat.py'
Nov 24 18:17:03 compute-0 sudo[66631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:03 compute-0 python3.9[66633]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:17:03 compute-0 sudo[66631]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:03 compute-0 sudo[66754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnhlllgeznodvzqyarzteaclxxwfrbpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008222.67422-427-14124389126245/AnsiballZ_copy.py'
Nov 24 18:17:03 compute-0 sudo[66754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:04 compute-0 python3.9[66756]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764008222.67422-427-14124389126245/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:17:04 compute-0 sudo[66754]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:04 compute-0 sudo[66906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwkthiwvfcocxgnrhnvpajtxuesdtrwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008224.3519802-442-89376329280697/AnsiballZ_stat.py'
Nov 24 18:17:04 compute-0 sudo[66906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:05 compute-0 python3.9[66908]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:17:05 compute-0 sudo[66906]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:05 compute-0 sudo[67029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yijwxoaxufzzbhsqqndpdfbmzmbvbkkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008224.3519802-442-89376329280697/AnsiballZ_copy.py'
Nov 24 18:17:05 compute-0 sudo[67029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:05 compute-0 python3.9[67031]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764008224.3519802-442-89376329280697/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:17:05 compute-0 sudo[67029]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:06 compute-0 sudo[67181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxdlodrsxazbzchsazerkmjoylizfhwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008225.840825-457-159621173804491/AnsiballZ_file.py'
Nov 24 18:17:06 compute-0 sudo[67181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:06 compute-0 python3.9[67183]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:17:06 compute-0 sudo[67181]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:06 compute-0 sudo[67333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsjlenlkfpogrwuombfsburbflpwdivc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008226.4964168-465-132831440109349/AnsiballZ_command.py'
Nov 24 18:17:06 compute-0 sudo[67333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:06 compute-0 python3.9[67335]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:17:07 compute-0 sudo[67333]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:07 compute-0 sudo[67492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-achykogulwgmoxxwjimtcvxytdohqzkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008227.2030358-473-185725770758718/AnsiballZ_blockinfile.py'
Nov 24 18:17:07 compute-0 sudo[67492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:07 compute-0 python3.9[67494]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:17:07 compute-0 sudo[67492]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:08 compute-0 sudo[67645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enrtazmspjyrrchxackwlmqpdzfjjsji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008228.0801787-482-84101932470084/AnsiballZ_file.py'
Nov 24 18:17:08 compute-0 sudo[67645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:08 compute-0 python3.9[67647]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:17:08 compute-0 sudo[67645]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:08 compute-0 sudo[67797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-przizeufaagmqtjaacepdxwlojsraimc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008228.691703-482-229148268114368/AnsiballZ_file.py'
Nov 24 18:17:08 compute-0 sudo[67797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:09 compute-0 python3.9[67799]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:17:09 compute-0 sudo[67797]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:09 compute-0 sudo[67949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imzyxnwejzgaimrvcvewfovjhqcxhghq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008229.349731-497-190392927208517/AnsiballZ_mount.py'
Nov 24 18:17:09 compute-0 sudo[67949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:10 compute-0 python3.9[67951]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 24 18:17:10 compute-0 sudo[67949]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:10 compute-0 sudo[68102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abcangzkitwscegxbzxmdipbbprmdvdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008230.250295-497-40333559748687/AnsiballZ_mount.py'
Nov 24 18:17:10 compute-0 sudo[68102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:10 compute-0 python3.9[68104]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 24 18:17:10 compute-0 sudo[68102]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:11 compute-0 sshd-session[58900]: Connection closed by 192.168.122.30 port 59998
Nov 24 18:17:11 compute-0 sshd-session[58897]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:17:11 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Nov 24 18:17:11 compute-0 systemd[1]: session-13.scope: Consumed 33.906s CPU time.
Nov 24 18:17:11 compute-0 systemd-logind[822]: Session 13 logged out. Waiting for processes to exit.
Nov 24 18:17:11 compute-0 systemd-logind[822]: Removed session 13.
Nov 24 18:17:16 compute-0 sshd-session[68130]: Accepted publickey for zuul from 192.168.122.30 port 51434 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:17:16 compute-0 systemd-logind[822]: New session 14 of user zuul.
Nov 24 18:17:16 compute-0 systemd[1]: Started Session 14 of User zuul.
Nov 24 18:17:16 compute-0 sshd-session[68130]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:17:16 compute-0 sudo[68283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssqbbsqbdqxhnwmyymhlrzynmoofxfwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008236.4412634-16-267905370315442/AnsiballZ_tempfile.py'
Nov 24 18:17:16 compute-0 sudo[68283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:17 compute-0 python3.9[68285]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 24 18:17:17 compute-0 sudo[68283]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:17 compute-0 sudo[68435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgidtlfydexqgoiovuqegzwdxclqgzom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008237.2805827-28-176594879427008/AnsiballZ_stat.py'
Nov 24 18:17:17 compute-0 sudo[68435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:17 compute-0 python3.9[68437]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:17:17 compute-0 sudo[68435]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:18 compute-0 sudo[68587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eghbfuaugjpuqqdoetgahlwjwfzadpsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008238.169942-38-29706846071082/AnsiballZ_setup.py'
Nov 24 18:17:18 compute-0 sudo[68587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:18 compute-0 python3.9[68589]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:17:18 compute-0 sudo[68587]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:19 compute-0 sudo[68739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sacvawenyamkcewlmqcuytkbtpdolmgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008239.2097316-47-216225842467805/AnsiballZ_blockinfile.py'
Nov 24 18:17:19 compute-0 sudo[68739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:19 compute-0 python3.9[68741]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhS8frVtJkphIV3qjYEBaOrfFAUD1SVRr7LLCHE4Oz5qMeQHKYm90YB9nO7ntC/BIXenfYoTm6fYVn1JaiGoGSQdRBXPQG/o6WD6Ec3pD/Mcl/KMJGYuMHxaEizMQ3wOpo20hOTbEsu6v2y+3ETjeAG0UF9fWh/vCDy6bX0hMh8o7mf9skIV8gvWuCbJo4Vk92qBh7z9qccV5j5J5maU9c28+VEF1nlN0GSyYT/IRFdD7gDE7QFZ9QpapaWGSFE7nCTgz4Mw4nnJ+KaxvkxxHf4knCpDxk59+uk/+9G8oUiFokkDbJiPI6sZS+BALztR/CzJpNrAYaYmhzjbSRYb51wPj5EnXYzqgik4JzhmsqsepLD79RGK2b4ZWnQVP7WFOUL+Wm4+MkbF0LVmcy1XJeA5yhmhodU+fpO1t1SZRONc1eqep1NVqxMOHXOQgKGpIAg95Vpx9szp5NhOkzp1cQTeEhxfog0RyENmd9NxKBpu3NmtFN+dETuLT2Co1JMhM=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA2lZlyCN0FJ/jD1EDSdkabXa5aE54G6xn7+v3fPL+BD
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHFHJ7xweyewLWbij/U6h4iEFO2zmE+OAqJetXAaVahyXo6KOKB5z+dQ1ItOa9RPE9AAjyAVton3sCrkTSjqY88=
                                             create=True mode=0644 path=/tmp/ansible.10_8toja state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:17:19 compute-0 sudo[68739]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:20 compute-0 sudo[68891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eufreynkjmwcaeoiayprudgealhccoeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008240.002072-55-267628251534974/AnsiballZ_command.py'
Nov 24 18:17:20 compute-0 sudo[68891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:20 compute-0 python3.9[68893]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.10_8toja' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:17:20 compute-0 sudo[68891]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:21 compute-0 sudo[69045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avpbaabldzvdzwgnxneefpssnijwwybc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008240.9270804-63-255496717024149/AnsiballZ_file.py'
Nov 24 18:17:21 compute-0 sudo[69045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:21 compute-0 python3.9[69047]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.10_8toja state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:17:21 compute-0 sudo[69045]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:21 compute-0 sshd-session[68133]: Connection closed by 192.168.122.30 port 51434
Nov 24 18:17:21 compute-0 sshd-session[68130]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:17:21 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Nov 24 18:17:21 compute-0 systemd[1]: session-14.scope: Consumed 3.360s CPU time.
Nov 24 18:17:21 compute-0 systemd-logind[822]: Session 14 logged out. Waiting for processes to exit.
Nov 24 18:17:21 compute-0 systemd-logind[822]: Removed session 14.
Nov 24 18:17:23 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 24 18:17:27 compute-0 sshd-session[69075]: Accepted publickey for zuul from 192.168.122.30 port 60004 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:17:27 compute-0 systemd-logind[822]: New session 15 of user zuul.
Nov 24 18:17:27 compute-0 systemd[1]: Started Session 15 of User zuul.
Nov 24 18:17:27 compute-0 sshd-session[69075]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:17:28 compute-0 python3.9[69228]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:17:29 compute-0 sudo[69382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjinxiynmnyarrelbkmhgemgizfsngbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008248.8796082-32-221347988126547/AnsiballZ_systemd.py'
Nov 24 18:17:29 compute-0 sudo[69382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:29 compute-0 python3.9[69384]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 24 18:17:29 compute-0 sudo[69382]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:30 compute-0 sudo[69536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaioitlhlfedjlgywrjrwdyxnduxkwvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008249.9734056-40-201231833122810/AnsiballZ_systemd.py'
Nov 24 18:17:30 compute-0 sudo[69536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:30 compute-0 python3.9[69538]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 18:17:30 compute-0 sudo[69536]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:31 compute-0 sudo[69689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqzktmibsbdzygwcefjnquexpeupptbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008250.7746427-49-169583207829582/AnsiballZ_command.py'
Nov 24 18:17:31 compute-0 sudo[69689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:31 compute-0 python3.9[69691]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:17:31 compute-0 sudo[69689]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:31 compute-0 sudo[69842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilvpwyandwefqdxabwffjnqeplcodhix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008251.5330148-57-208872163585537/AnsiballZ_stat.py'
Nov 24 18:17:31 compute-0 sudo[69842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:32 compute-0 python3.9[69844]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:17:32 compute-0 sudo[69842]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:32 compute-0 sudo[69996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljgqxictxgicappxlnzujaubpmwmauzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008252.297565-65-160341735444824/AnsiballZ_command.py'
Nov 24 18:17:32 compute-0 sudo[69996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:32 compute-0 python3.9[69998]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:17:32 compute-0 sudo[69996]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:33 compute-0 sudo[70151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkcmxhxozzhqoiasnutwypifgwmuiafe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008252.9238472-73-32007279050382/AnsiballZ_file.py'
Nov 24 18:17:33 compute-0 sudo[70151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:33 compute-0 python3.9[70153]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:17:33 compute-0 sudo[70151]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:33 compute-0 sshd-session[69078]: Connection closed by 192.168.122.30 port 60004
Nov 24 18:17:33 compute-0 sshd-session[69075]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:17:33 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Nov 24 18:17:33 compute-0 systemd[1]: session-15.scope: Consumed 4.373s CPU time.
Nov 24 18:17:33 compute-0 systemd-logind[822]: Session 15 logged out. Waiting for processes to exit.
Nov 24 18:17:33 compute-0 systemd-logind[822]: Removed session 15.
Nov 24 18:17:39 compute-0 sshd-session[70178]: Accepted publickey for zuul from 192.168.122.30 port 53454 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:17:39 compute-0 systemd-logind[822]: New session 16 of user zuul.
Nov 24 18:17:39 compute-0 systemd[1]: Started Session 16 of User zuul.
Nov 24 18:17:39 compute-0 sshd-session[70178]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:17:40 compute-0 python3.9[70331]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:17:41 compute-0 sudo[70485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lokhldofbpfzhrtykxemztglmjajmwwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008261.2435932-34-231851159342202/AnsiballZ_setup.py'
Nov 24 18:17:41 compute-0 sudo[70485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:41 compute-0 python3.9[70487]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 18:17:42 compute-0 sudo[70485]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:42 compute-0 sudo[70569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npcazhhmvscdtemobbhcclmfdgfeulwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008261.2435932-34-231851159342202/AnsiballZ_dnf.py'
Nov 24 18:17:42 compute-0 sudo[70569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:42 compute-0 python3.9[70571]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 18:17:43 compute-0 sudo[70569]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:44 compute-0 python3.9[70722]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:17:45 compute-0 python3.9[70873]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 18:17:46 compute-0 python3.9[71023]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:17:46 compute-0 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 18:17:47 compute-0 python3.9[71174]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:17:47 compute-0 sshd-session[70181]: Connection closed by 192.168.122.30 port 53454
Nov 24 18:17:47 compute-0 sshd-session[70178]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:17:47 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Nov 24 18:17:47 compute-0 systemd[1]: session-16.scope: Consumed 5.840s CPU time.
Nov 24 18:17:47 compute-0 systemd-logind[822]: Session 16 logged out. Waiting for processes to exit.
Nov 24 18:17:47 compute-0 systemd-logind[822]: Removed session 16.
Nov 24 18:17:55 compute-0 sshd-session[71199]: Accepted publickey for zuul from 38.102.83.41 port 49908 ssh2: RSA SHA256:hSQOID5Ghp9Ra3Xg4ItfWrKou3AexdidDUUIPh+xbVY
Nov 24 18:17:55 compute-0 systemd-logind[822]: New session 17 of user zuul.
Nov 24 18:17:55 compute-0 systemd[1]: Started Session 17 of User zuul.
Nov 24 18:17:55 compute-0 sshd-session[71199]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:17:55 compute-0 sudo[71275]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uupjwbrweasseziqzgkpzujnsennecqc ; /usr/bin/python3'
Nov 24 18:17:55 compute-0 sudo[71275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:55 compute-0 useradd[71279]: new group: name=ceph-admin, GID=42478
Nov 24 18:17:55 compute-0 useradd[71279]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Nov 24 18:17:55 compute-0 sudo[71275]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:56 compute-0 sudo[71361]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztnyzvfrhnaoaweuyptsyqamsbnlipfj ; /usr/bin/python3'
Nov 24 18:17:56 compute-0 sudo[71361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:56 compute-0 sudo[71361]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:56 compute-0 sudo[71434]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rubzflgpmvmqxcobcqjyglceakdgndyh ; /usr/bin/python3'
Nov 24 18:17:56 compute-0 sudo[71434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:56 compute-0 sudo[71434]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:57 compute-0 sudo[71484]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofalbxwdgbmcgwduceyhedsjwetjeaab ; /usr/bin/python3'
Nov 24 18:17:57 compute-0 sudo[71484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:57 compute-0 sudo[71484]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:57 compute-0 sudo[71510]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnbgrpmzstluydyismoaqtrsyrtsquan ; /usr/bin/python3'
Nov 24 18:17:57 compute-0 sudo[71510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:57 compute-0 sudo[71510]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:57 compute-0 sudo[71536]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjynptxjdjvhbsrdresbriryrrultrtp ; /usr/bin/python3'
Nov 24 18:17:57 compute-0 sudo[71536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:58 compute-0 sudo[71536]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:58 compute-0 sudo[71562]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwwwwfzqdpbhntmurhwxbtmrhgaapvuw ; /usr/bin/python3'
Nov 24 18:17:58 compute-0 sudo[71562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:58 compute-0 sudo[71562]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:58 compute-0 sudo[71640]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntixplmceakyghaatwoimktpmdtdbvlu ; /usr/bin/python3'
Nov 24 18:17:58 compute-0 sudo[71640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:58 compute-0 sudo[71640]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:59 compute-0 sudo[71713]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggraqfuumagsktqwhopvgtnjiqasfdof ; /usr/bin/python3'
Nov 24 18:17:59 compute-0 sudo[71713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:59 compute-0 sudo[71713]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:59 compute-0 sudo[71815]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyfhjuidzazxasjndmwqrwetpztgdela ; /usr/bin/python3'
Nov 24 18:17:59 compute-0 sudo[71815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:17:59 compute-0 sudo[71815]: pam_unix(sudo:session): session closed for user root
Nov 24 18:17:59 compute-0 sudo[71888]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qswhpruvrsacbhjmrfxltiqxumhvkulg ; /usr/bin/python3'
Nov 24 18:17:59 compute-0 sudo[71888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:00 compute-0 sudo[71888]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:00 compute-0 sudo[71938]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baoavffxcpajwxpvifkqtpqlppriwnaq ; /usr/bin/python3'
Nov 24 18:18:00 compute-0 sudo[71938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:00 compute-0 python3[71940]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:18:01 compute-0 sudo[71938]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:02 compute-0 sudo[72033]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhfznphvlurpfjnxcrqatnbqkhddufgq ; /usr/bin/python3'
Nov 24 18:18:02 compute-0 sudo[72033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:02 compute-0 python3[72035]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 24 18:18:03 compute-0 sudo[72033]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:03 compute-0 sudo[72060]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqleghgjtmwakfcdqwwlgrnoqlhgjnou ; /usr/bin/python3'
Nov 24 18:18:03 compute-0 sudo[72060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:03 compute-0 python3[72062]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 18:18:03 compute-0 sudo[72060]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:04 compute-0 sudo[72086]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtlonlzpnpkyurmqbhertzibufyblxjq ; /usr/bin/python3'
Nov 24 18:18:04 compute-0 sudo[72086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:04 compute-0 python3[72088]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:18:04 compute-0 kernel: loop: module loaded
Nov 24 18:18:04 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Nov 24 18:18:04 compute-0 sudo[72086]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:04 compute-0 sudo[72120]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxwvchpggjgygifkgigfyvbpqxivfmnj ; /usr/bin/python3'
Nov 24 18:18:04 compute-0 sudo[72120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:04 compute-0 python3[72122]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:18:04 compute-0 lvm[72125]: PV /dev/loop3 not used.
Nov 24 18:18:04 compute-0 lvm[72134]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 18:18:04 compute-0 sudo[72120]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:04 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 24 18:18:04 compute-0 lvm[72136]:   1 logical volume(s) in volume group "ceph_vg0" now active
Nov 24 18:18:04 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Nov 24 18:18:05 compute-0 sudo[72212]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ushlcpczaafcuotuxqkishksehhjbqhu ; /usr/bin/python3'
Nov 24 18:18:05 compute-0 sudo[72212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:05 compute-0 python3[72214]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 18:18:05 compute-0 sudo[72212]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:05 compute-0 sudo[72285]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukcwmqvlabnptixuukiangiebojgjfcv ; /usr/bin/python3'
Nov 24 18:18:05 compute-0 sudo[72285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:05 compute-0 python3[72287]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764008284.9248202-36414-77586244552422/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:18:05 compute-0 sudo[72285]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:06 compute-0 sudo[72335]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irleuqbfwhmunfqgqqmssobmrdtzqhft ; /usr/bin/python3'
Nov 24 18:18:06 compute-0 sudo[72335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:06 compute-0 python3[72337]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:18:06 compute-0 systemd[1]: Reloading.
Nov 24 18:18:06 compute-0 systemd-sysv-generator[72371]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:18:06 compute-0 systemd-rc-local-generator[72368]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:18:06 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 24 18:18:06 compute-0 bash[72378]: /dev/loop3: [64513]:4194936 (/var/lib/ceph-osd-0.img)
Nov 24 18:18:06 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 24 18:18:06 compute-0 sudo[72335]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:06 compute-0 lvm[72379]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 18:18:06 compute-0 lvm[72379]: VG ceph_vg0 finished
Nov 24 18:18:06 compute-0 sudo[72403]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiscxbiteycrarhqacuhbtqalufzwykt ; /usr/bin/python3'
Nov 24 18:18:06 compute-0 sudo[72403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:07 compute-0 python3[72405]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 24 18:18:08 compute-0 sudo[72403]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:08 compute-0 sudo[72430]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atmybzipszoqlaprjkeivkvjpzlcwwee ; /usr/bin/python3'
Nov 24 18:18:08 compute-0 sudo[72430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:08 compute-0 python3[72432]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 18:18:08 compute-0 sudo[72430]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:08 compute-0 sudo[72456]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrxyivdgrhakgajcyzhqoectwvolinqf ; /usr/bin/python3'
Nov 24 18:18:08 compute-0 sudo[72456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:09 compute-0 python3[72458]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                          losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:18:09 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Nov 24 18:18:09 compute-0 sudo[72456]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:09 compute-0 sudo[72488]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnthwligodrngwrkkzwutlnnpfmkimrr ; /usr/bin/python3'
Nov 24 18:18:09 compute-0 sudo[72488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:09 compute-0 python3[72490]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                          vgcreate ceph_vg1 /dev/loop4
                                          lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:18:09 compute-0 lvm[72493]: PV /dev/loop4 not used.
Nov 24 18:18:09 compute-0 lvm[72503]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 24 18:18:09 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Nov 24 18:18:09 compute-0 lvm[72505]:   1 logical volume(s) in volume group "ceph_vg1" now active
Nov 24 18:18:09 compute-0 sudo[72488]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:09 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Nov 24 18:18:09 compute-0 sudo[72581]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coqesqyebchxdljquhzhawpxamnxgxjr ; /usr/bin/python3'
Nov 24 18:18:09 compute-0 sudo[72581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:10 compute-0 python3[72583]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 18:18:10 compute-0 sudo[72581]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:10 compute-0 sudo[72654]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwsmiijvpqqnvqmckekkkdbataojprlu ; /usr/bin/python3'
Nov 24 18:18:10 compute-0 sudo[72654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:10 compute-0 python3[72656]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764008289.8131237-36441-112166416995885/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:18:10 compute-0 sudo[72654]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:10 compute-0 sudo[72704]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbmxsmlnxqkybgzvscpppyqrzoazkmct ; /usr/bin/python3'
Nov 24 18:18:10 compute-0 sudo[72704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:11 compute-0 python3[72706]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:18:11 compute-0 systemd[1]: Reloading.
Nov 24 18:18:11 compute-0 systemd-rc-local-generator[72733]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:18:11 compute-0 systemd-sysv-generator[72736]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:18:11 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 24 18:18:11 compute-0 bash[72746]: /dev/loop4: [64513]:4328009 (/var/lib/ceph-osd-1.img)
Nov 24 18:18:11 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 24 18:18:11 compute-0 lvm[72747]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 24 18:18:11 compute-0 lvm[72747]: VG ceph_vg1 finished
Nov 24 18:18:11 compute-0 sudo[72704]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:11 compute-0 sudo[72771]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmxpirubjzgclgozdjdlsmsqhwegdwlk ; /usr/bin/python3'
Nov 24 18:18:11 compute-0 sudo[72771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:11 compute-0 python3[72773]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 24 18:18:12 compute-0 sudo[72771]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:13 compute-0 sudo[72798]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrxgxqsabpteewbgwbfdxhenojfxzerk ; /usr/bin/python3'
Nov 24 18:18:13 compute-0 sudo[72798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:13 compute-0 python3[72800]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 18:18:13 compute-0 sudo[72798]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:13 compute-0 sudo[72824]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfcssflwdhbfjkjwxjohuszgoehirtod ; /usr/bin/python3'
Nov 24 18:18:13 compute-0 sudo[72824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:13 compute-0 python3[72826]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                          losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:18:13 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Nov 24 18:18:13 compute-0 sudo[72824]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:13 compute-0 sudo[72856]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrxjlvnuhkpmacsljbfjdcunwmzqlldx ; /usr/bin/python3'
Nov 24 18:18:13 compute-0 sudo[72856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:13 compute-0 python3[72858]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                          vgcreate ceph_vg2 /dev/loop5
                                          lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:18:13 compute-0 lvm[72861]: PV /dev/loop5 not used.
Nov 24 18:18:14 compute-0 lvm[72870]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 24 18:18:14 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Nov 24 18:18:14 compute-0 sudo[72856]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:14 compute-0 lvm[72872]:   1 logical volume(s) in volume group "ceph_vg2" now active
Nov 24 18:18:14 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Nov 24 18:18:14 compute-0 sudo[72948]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cswzsxsuekoozuclnoqpcfsxydwigyjo ; /usr/bin/python3'
Nov 24 18:18:14 compute-0 sudo[72948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:14 compute-0 python3[72950]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 18:18:14 compute-0 sudo[72948]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:14 compute-0 sudo[73021]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raykmptptuvvjyghidiuhgiyiafzwbbs ; /usr/bin/python3'
Nov 24 18:18:14 compute-0 sudo[73021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:15 compute-0 python3[73023]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764008294.2611268-36470-204393291903758/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:18:15 compute-0 sudo[73021]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:15 compute-0 sudo[73071]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swxvpdcrlddbpcklanwvduoxgojbbgom ; /usr/bin/python3'
Nov 24 18:18:15 compute-0 sudo[73071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:15 compute-0 python3[73073]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:18:15 compute-0 systemd[1]: Reloading.
Nov 24 18:18:15 compute-0 systemd-rc-local-generator[73105]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:18:15 compute-0 systemd-sysv-generator[73109]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:18:15 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 24 18:18:15 compute-0 bash[73114]: /dev/loop5: [64513]:4328010 (/var/lib/ceph-osd-2.img)
Nov 24 18:18:15 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 24 18:18:15 compute-0 sudo[73071]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:15 compute-0 lvm[73115]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 24 18:18:15 compute-0 lvm[73115]: VG ceph_vg2 finished
Nov 24 18:18:17 compute-0 python3[73139]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:18:18 compute-0 chronyd[58415]: Selected source 167.160.187.12 (pool.ntp.org)
Nov 24 18:18:19 compute-0 sudo[73230]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjxlgvrgrywfnfalhqtwfsmazxwtssfb ; /usr/bin/python3'
Nov 24 18:18:19 compute-0 sudo[73230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:20 compute-0 python3[73232]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 24 18:18:21 compute-0 groupadd[73238]: group added to /etc/group: name=cephadm, GID=992
Nov 24 18:18:21 compute-0 groupadd[73238]: group added to /etc/gshadow: name=cephadm
Nov 24 18:18:21 compute-0 groupadd[73238]: new group: name=cephadm, GID=992
Nov 24 18:18:21 compute-0 useradd[73245]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Nov 24 18:18:21 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 18:18:21 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 18:18:21 compute-0 sudo[73230]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:22 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 18:18:22 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 18:18:22 compute-0 systemd[1]: run-rda00198448e048848df8d8060e2a43ed.service: Deactivated successfully.
Nov 24 18:18:22 compute-0 sudo[73340]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gycinktyzdhigdjficodonljniqnqxlv ; /usr/bin/python3'
Nov 24 18:18:22 compute-0 sudo[73340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:22 compute-0 python3[73343]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 18:18:22 compute-0 sudo[73340]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:22 compute-0 sudo[73369]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzyehlgcovbdrefbguivdoxqkxnjnhpv ; /usr/bin/python3'
Nov 24 18:18:22 compute-0 sudo[73369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:22 compute-0 python3[73371]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:18:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 18:18:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 18:18:22 compute-0 sudo[73369]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:23 compute-0 sudo[73432]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipucsjegycsmzlvzhkhxwpdzswvhbwbq ; /usr/bin/python3'
Nov 24 18:18:23 compute-0 sudo[73432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:23 compute-0 python3[73434]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:18:23 compute-0 sudo[73432]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:23 compute-0 sudo[73458]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvfbygqlxmecinporutssrayvugofpaz ; /usr/bin/python3'
Nov 24 18:18:23 compute-0 sudo[73458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:23 compute-0 python3[73460]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:18:23 compute-0 sudo[73458]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:23 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 18:18:24 compute-0 sudo[73536]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdlvkbcjfpmrabtiollmzpdmpobcdevu ; /usr/bin/python3'
Nov 24 18:18:24 compute-0 sudo[73536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:24 compute-0 python3[73538]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 18:18:24 compute-0 sudo[73536]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:24 compute-0 sudo[73609]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wphzzvrjdnepqsscixcnnhcykfzssdut ; /usr/bin/python3'
Nov 24 18:18:24 compute-0 sudo[73609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:24 compute-0 python3[73611]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764008304.0859396-36622-104594281702851/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:18:24 compute-0 sudo[73609]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:25 compute-0 sudo[73711]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvvebhxwewgmitkyanrhrwtugmaqyxbm ; /usr/bin/python3'
Nov 24 18:18:25 compute-0 sudo[73711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:25 compute-0 python3[73713]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 18:18:25 compute-0 sudo[73711]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:25 compute-0 sudo[73784]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggtbfugqesajjxevuwcsnkswjwikqnxz ; /usr/bin/python3'
Nov 24 18:18:25 compute-0 sudo[73784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:25 compute-0 python3[73786]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764008305.2705932-36640-269736424405030/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:18:25 compute-0 sudo[73784]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:26 compute-0 sudo[73834]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwrgciylmpwapriddbjhuydudfnpqccs ; /usr/bin/python3'
Nov 24 18:18:26 compute-0 sudo[73834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:26 compute-0 python3[73836]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 18:18:26 compute-0 sudo[73834]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:26 compute-0 sudo[73862]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjbexxoslpkuxwylczvxqlqmhktixhkp ; /usr/bin/python3'
Nov 24 18:18:26 compute-0 sudo[73862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:26 compute-0 python3[73864]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 18:18:26 compute-0 sudo[73862]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:26 compute-0 sudo[73890]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvgjnrkltsjpryeonimepmaeragatmmd ; /usr/bin/python3'
Nov 24 18:18:26 compute-0 sudo[73890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:26 compute-0 python3[73892]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 18:18:27 compute-0 sudo[73890]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:27 compute-0 sudo[73918]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzvcbtprsyhawdavyxqsucvfreevvlvl ; /usr/bin/python3'
Nov 24 18:18:27 compute-0 sudo[73918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:18:27 compute-0 python3[73920]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:18:27 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 18:18:27 compute-0 sshd-session[73937]: Accepted publickey for ceph-admin from 192.168.122.100 port 43916 ssh2: RSA SHA256:UuqXPbG/GIyS8E+86W5iuX+TvUSxrYgcGPs97QlP9LE
Nov 24 18:18:27 compute-0 systemd-logind[822]: New session 18 of user ceph-admin.
Nov 24 18:18:27 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 24 18:18:27 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 24 18:18:27 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 24 18:18:27 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 24 18:18:27 compute-0 systemd[73941]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 18:18:27 compute-0 systemd[73941]: Queued start job for default target Main User Target.
Nov 24 18:18:27 compute-0 systemd[73941]: Created slice User Application Slice.
Nov 24 18:18:27 compute-0 systemd[73941]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 24 18:18:27 compute-0 systemd[73941]: Started Daily Cleanup of User's Temporary Directories.
Nov 24 18:18:27 compute-0 systemd[73941]: Reached target Paths.
Nov 24 18:18:27 compute-0 systemd[73941]: Reached target Timers.
Nov 24 18:18:27 compute-0 systemd[73941]: Starting D-Bus User Message Bus Socket...
Nov 24 18:18:27 compute-0 systemd[73941]: Starting Create User's Volatile Files and Directories...
Nov 24 18:18:27 compute-0 systemd[73941]: Listening on D-Bus User Message Bus Socket.
Nov 24 18:18:27 compute-0 systemd[73941]: Reached target Sockets.
Nov 24 18:18:27 compute-0 systemd[73941]: Finished Create User's Volatile Files and Directories.
Nov 24 18:18:27 compute-0 systemd[73941]: Reached target Basic System.
Nov 24 18:18:27 compute-0 systemd[73941]: Reached target Main User Target.
Nov 24 18:18:27 compute-0 systemd[73941]: Startup finished in 135ms.
Nov 24 18:18:27 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 24 18:18:27 compute-0 systemd[1]: Started Session 18 of User ceph-admin.
Nov 24 18:18:27 compute-0 sshd-session[73937]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 18:18:27 compute-0 sudo[73957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Nov 24 18:18:27 compute-0 sudo[73957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:18:27 compute-0 sudo[73957]: pam_unix(sudo:session): session closed for user root
Nov 24 18:18:27 compute-0 sshd-session[73956]: Received disconnect from 192.168.122.100 port 43916:11: disconnected by user
Nov 24 18:18:27 compute-0 sshd-session[73956]: Disconnected from user ceph-admin 192.168.122.100 port 43916
Nov 24 18:18:27 compute-0 sshd-session[73937]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 18:18:27 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Nov 24 18:18:27 compute-0 systemd-logind[822]: Session 18 logged out. Waiting for processes to exit.
Nov 24 18:18:27 compute-0 systemd-logind[822]: Removed session 18.
Nov 24 18:18:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat381069562-lower\x2dmapped.mount: Deactivated successfully.
Nov 24 18:18:38 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Nov 24 18:18:38 compute-0 systemd[73941]: Activating special unit Exit the Session...
Nov 24 18:18:38 compute-0 systemd[73941]: Stopped target Main User Target.
Nov 24 18:18:38 compute-0 systemd[73941]: Stopped target Basic System.
Nov 24 18:18:38 compute-0 systemd[73941]: Stopped target Paths.
Nov 24 18:18:38 compute-0 systemd[73941]: Stopped target Sockets.
Nov 24 18:18:38 compute-0 systemd[73941]: Stopped target Timers.
Nov 24 18:18:38 compute-0 systemd[73941]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 24 18:18:38 compute-0 systemd[73941]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 24 18:18:38 compute-0 systemd[73941]: Closed D-Bus User Message Bus Socket.
Nov 24 18:18:38 compute-0 systemd[73941]: Stopped Create User's Volatile Files and Directories.
Nov 24 18:18:38 compute-0 systemd[73941]: Removed slice User Application Slice.
Nov 24 18:18:38 compute-0 systemd[73941]: Reached target Shutdown.
Nov 24 18:18:38 compute-0 systemd[73941]: Finished Exit the Session.
Nov 24 18:18:38 compute-0 systemd[73941]: Reached target Exit the Session.
Nov 24 18:18:38 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Nov 24 18:18:38 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Nov 24 18:18:38 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 24 18:18:38 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 24 18:18:38 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 24 18:18:38 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 24 18:18:38 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Nov 24 18:18:41 compute-0 podman[73994]: 2025-11-24 18:18:41.415429495 +0000 UTC m=+13.422087290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:18:41 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 18:18:41 compute-0 podman[74052]: 2025-11-24 18:18:41.491738028 +0000 UTC m=+0.047967521 container create bc3a7eb7c0e25c38492abb20e588db208319a4f338ecf0431cb4270704a6ed2e (image=quay.io/ceph/ceph:v18, name=peaceful_mcnulty, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:18:41 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 24 18:18:41 compute-0 systemd[1]: Started libpod-conmon-bc3a7eb7c0e25c38492abb20e588db208319a4f338ecf0431cb4270704a6ed2e.scope.
Nov 24 18:18:41 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:18:41 compute-0 podman[74052]: 2025-11-24 18:18:41.470575603 +0000 UTC m=+0.026805146 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:18:41 compute-0 podman[74052]: 2025-11-24 18:18:41.599383077 +0000 UTC m=+0.155612610 container init bc3a7eb7c0e25c38492abb20e588db208319a4f338ecf0431cb4270704a6ed2e (image=quay.io/ceph/ceph:v18, name=peaceful_mcnulty, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Nov 24 18:18:41 compute-0 podman[74052]: 2025-11-24 18:18:41.606797381 +0000 UTC m=+0.163026864 container start bc3a7eb7c0e25c38492abb20e588db208319a4f338ecf0431cb4270704a6ed2e (image=quay.io/ceph/ceph:v18, name=peaceful_mcnulty, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:18:41 compute-0 podman[74052]: 2025-11-24 18:18:41.610233947 +0000 UTC m=+0.166463480 container attach bc3a7eb7c0e25c38492abb20e588db208319a4f338ecf0431cb4270704a6ed2e (image=quay.io/ceph/ceph:v18, name=peaceful_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 18:18:41 compute-0 peaceful_mcnulty[74068]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 24 18:18:41 compute-0 systemd[1]: libpod-bc3a7eb7c0e25c38492abb20e588db208319a4f338ecf0431cb4270704a6ed2e.scope: Deactivated successfully.
Nov 24 18:18:41 compute-0 podman[74052]: 2025-11-24 18:18:41.913032607 +0000 UTC m=+0.469262090 container died bc3a7eb7c0e25c38492abb20e588db208319a4f338ecf0431cb4270704a6ed2e (image=quay.io/ceph/ceph:v18, name=peaceful_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 18:18:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-fab0c205caeb711a1e1dbe14e825a6704978e2dd659e25353db701a9c9b208df-merged.mount: Deactivated successfully.
Nov 24 18:18:41 compute-0 podman[74052]: 2025-11-24 18:18:41.958285859 +0000 UTC m=+0.514515342 container remove bc3a7eb7c0e25c38492abb20e588db208319a4f338ecf0431cb4270704a6ed2e (image=quay.io/ceph/ceph:v18, name=peaceful_mcnulty, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 18:18:41 compute-0 systemd[1]: libpod-conmon-bc3a7eb7c0e25c38492abb20e588db208319a4f338ecf0431cb4270704a6ed2e.scope: Deactivated successfully.
Nov 24 18:18:42 compute-0 podman[74086]: 2025-11-24 18:18:42.012380651 +0000 UTC m=+0.035871171 container create 8e0310c76eb66d7a3438ef85b6cb93f8342ab0ff22b0afe1b93a93876bbf0dfe (image=quay.io/ceph/ceph:v18, name=xenodochial_bose, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 18:18:42 compute-0 systemd[1]: Started libpod-conmon-8e0310c76eb66d7a3438ef85b6cb93f8342ab0ff22b0afe1b93a93876bbf0dfe.scope.
Nov 24 18:18:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:18:42 compute-0 podman[74086]: 2025-11-24 18:18:42.087247398 +0000 UTC m=+0.110737938 container init 8e0310c76eb66d7a3438ef85b6cb93f8342ab0ff22b0afe1b93a93876bbf0dfe (image=quay.io/ceph/ceph:v18, name=xenodochial_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:18:42 compute-0 podman[74086]: 2025-11-24 18:18:41.996987939 +0000 UTC m=+0.020478479 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:18:42 compute-0 podman[74086]: 2025-11-24 18:18:42.094922438 +0000 UTC m=+0.118412958 container start 8e0310c76eb66d7a3438ef85b6cb93f8342ab0ff22b0afe1b93a93876bbf0dfe (image=quay.io/ceph/ceph:v18, name=xenodochial_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 24 18:18:42 compute-0 podman[74086]: 2025-11-24 18:18:42.097651536 +0000 UTC m=+0.121142056 container attach 8e0310c76eb66d7a3438ef85b6cb93f8342ab0ff22b0afe1b93a93876bbf0dfe (image=quay.io/ceph/ceph:v18, name=xenodochial_bose, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 18:18:42 compute-0 xenodochial_bose[74103]: 167 167
Nov 24 18:18:42 compute-0 systemd[1]: libpod-8e0310c76eb66d7a3438ef85b6cb93f8342ab0ff22b0afe1b93a93876bbf0dfe.scope: Deactivated successfully.
Nov 24 18:18:42 compute-0 podman[74086]: 2025-11-24 18:18:42.100429115 +0000 UTC m=+0.123919665 container died 8e0310c76eb66d7a3438ef85b6cb93f8342ab0ff22b0afe1b93a93876bbf0dfe (image=quay.io/ceph/ceph:v18, name=xenodochial_bose, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 18:18:42 compute-0 podman[74086]: 2025-11-24 18:18:42.137042283 +0000 UTC m=+0.160532803 container remove 8e0310c76eb66d7a3438ef85b6cb93f8342ab0ff22b0afe1b93a93876bbf0dfe (image=quay.io/ceph/ceph:v18, name=xenodochial_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:18:42 compute-0 systemd[1]: libpod-conmon-8e0310c76eb66d7a3438ef85b6cb93f8342ab0ff22b0afe1b93a93876bbf0dfe.scope: Deactivated successfully.
Nov 24 18:18:42 compute-0 podman[74120]: 2025-11-24 18:18:42.20344732 +0000 UTC m=+0.042106036 container create ecfec0ec0577291b15ab8ec15ee67927fa5cde0f15ff15e6a54301c23042373a (image=quay.io/ceph/ceph:v18, name=distracted_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 18:18:42 compute-0 systemd[1]: Started libpod-conmon-ecfec0ec0577291b15ab8ec15ee67927fa5cde0f15ff15e6a54301c23042373a.scope.
Nov 24 18:18:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:18:42 compute-0 podman[74120]: 2025-11-24 18:18:42.184991842 +0000 UTC m=+0.023650578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:18:42 compute-0 podman[74120]: 2025-11-24 18:18:42.286688994 +0000 UTC m=+0.125347780 container init ecfec0ec0577291b15ab8ec15ee67927fa5cde0f15ff15e6a54301c23042373a (image=quay.io/ceph/ceph:v18, name=distracted_buck, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:18:42 compute-0 podman[74120]: 2025-11-24 18:18:42.29296974 +0000 UTC m=+0.131628486 container start ecfec0ec0577291b15ab8ec15ee67927fa5cde0f15ff15e6a54301c23042373a (image=quay.io/ceph/ceph:v18, name=distracted_buck, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 24 18:18:42 compute-0 podman[74120]: 2025-11-24 18:18:42.297338689 +0000 UTC m=+0.135997505 container attach ecfec0ec0577291b15ab8ec15ee67927fa5cde0f15ff15e6a54301c23042373a (image=quay.io/ceph/ceph:v18, name=distracted_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 18:18:42 compute-0 distracted_buck[74136]: AQCCoSRpI0CEEhAAh1AsvDj3Tier0fuH8CTJ+A==
Nov 24 18:18:42 compute-0 systemd[1]: libpod-ecfec0ec0577291b15ab8ec15ee67927fa5cde0f15ff15e6a54301c23042373a.scope: Deactivated successfully.
Nov 24 18:18:42 compute-0 podman[74120]: 2025-11-24 18:18:42.314560396 +0000 UTC m=+0.153219142 container died ecfec0ec0577291b15ab8ec15ee67927fa5cde0f15ff15e6a54301c23042373a (image=quay.io/ceph/ceph:v18, name=distracted_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:18:42 compute-0 podman[74120]: 2025-11-24 18:18:42.354611109 +0000 UTC m=+0.193269825 container remove ecfec0ec0577291b15ab8ec15ee67927fa5cde0f15ff15e6a54301c23042373a (image=quay.io/ceph/ceph:v18, name=distracted_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:18:42 compute-0 systemd[1]: libpod-conmon-ecfec0ec0577291b15ab8ec15ee67927fa5cde0f15ff15e6a54301c23042373a.scope: Deactivated successfully.
Nov 24 18:18:42 compute-0 podman[74154]: 2025-11-24 18:18:42.416075414 +0000 UTC m=+0.041086890 container create 9463b69d928537a4d3154c60f8db51c36c31ee558701a97aedd431a412b6689e (image=quay.io/ceph/ceph:v18, name=festive_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:18:42 compute-0 systemd[1]: Started libpod-conmon-9463b69d928537a4d3154c60f8db51c36c31ee558701a97aedd431a412b6689e.scope.
Nov 24 18:18:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:18:42 compute-0 podman[74154]: 2025-11-24 18:18:42.476934513 +0000 UTC m=+0.101946009 container init 9463b69d928537a4d3154c60f8db51c36c31ee558701a97aedd431a412b6689e (image=quay.io/ceph/ceph:v18, name=festive_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 18:18:42 compute-0 podman[74154]: 2025-11-24 18:18:42.481483006 +0000 UTC m=+0.106494482 container start 9463b69d928537a4d3154c60f8db51c36c31ee558701a97aedd431a412b6689e (image=quay.io/ceph/ceph:v18, name=festive_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:18:42 compute-0 podman[74154]: 2025-11-24 18:18:42.48446348 +0000 UTC m=+0.109474956 container attach 9463b69d928537a4d3154c60f8db51c36c31ee558701a97aedd431a412b6689e (image=quay.io/ceph/ceph:v18, name=festive_bhabha, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 18:18:42 compute-0 podman[74154]: 2025-11-24 18:18:42.396068507 +0000 UTC m=+0.021080043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:18:42 compute-0 festive_bhabha[74169]: AQCCoSRpe+LQHRAABOTr2MY5IzGxlo1L2CWrEw==
Nov 24 18:18:42 compute-0 systemd[1]: libpod-9463b69d928537a4d3154c60f8db51c36c31ee558701a97aedd431a412b6689e.scope: Deactivated successfully.
Nov 24 18:18:42 compute-0 podman[74154]: 2025-11-24 18:18:42.503563933 +0000 UTC m=+0.128575429 container died 9463b69d928537a4d3154c60f8db51c36c31ee558701a97aedd431a412b6689e (image=quay.io/ceph/ceph:v18, name=festive_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 18:18:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bf4a24ee8880e81dd511bccbe4675ee5fb29b9f69803619a61a77033cb22ace-merged.mount: Deactivated successfully.
Nov 24 18:18:42 compute-0 podman[74154]: 2025-11-24 18:18:42.536543841 +0000 UTC m=+0.161555317 container remove 9463b69d928537a4d3154c60f8db51c36c31ee558701a97aedd431a412b6689e (image=quay.io/ceph/ceph:v18, name=festive_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 18:18:42 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 18:18:42 compute-0 systemd[1]: libpod-conmon-9463b69d928537a4d3154c60f8db51c36c31ee558701a97aedd431a412b6689e.scope: Deactivated successfully.
Nov 24 18:18:42 compute-0 podman[74188]: 2025-11-24 18:18:42.590307215 +0000 UTC m=+0.034522217 container create 21fd72d6bb588a6c7c099f1ce28ee58b807adda6fd3fc4efaa702a9e4a0b0f08 (image=quay.io/ceph/ceph:v18, name=optimistic_cray, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:18:42 compute-0 systemd[1]: Started libpod-conmon-21fd72d6bb588a6c7c099f1ce28ee58b807adda6fd3fc4efaa702a9e4a0b0f08.scope.
Nov 24 18:18:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:18:42 compute-0 podman[74188]: 2025-11-24 18:18:42.57517663 +0000 UTC m=+0.019391652 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:18:42 compute-0 podman[74188]: 2025-11-24 18:18:42.841166967 +0000 UTC m=+0.285381989 container init 21fd72d6bb588a6c7c099f1ce28ee58b807adda6fd3fc4efaa702a9e4a0b0f08 (image=quay.io/ceph/ceph:v18, name=optimistic_cray, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:18:42 compute-0 podman[74188]: 2025-11-24 18:18:42.846037908 +0000 UTC m=+0.290252910 container start 21fd72d6bb588a6c7c099f1ce28ee58b807adda6fd3fc4efaa702a9e4a0b0f08 (image=quay.io/ceph/ceph:v18, name=optimistic_cray, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:18:42 compute-0 podman[74188]: 2025-11-24 18:18:42.849221457 +0000 UTC m=+0.293436529 container attach 21fd72d6bb588a6c7c099f1ce28ee58b807adda6fd3fc4efaa702a9e4a0b0f08 (image=quay.io/ceph/ceph:v18, name=optimistic_cray, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 18:18:42 compute-0 optimistic_cray[74204]: AQCCoSRpGfBtMxAAbOkOo3GiHZcm0uhcwe9P5g==
Nov 24 18:18:42 compute-0 systemd[1]: libpod-21fd72d6bb588a6c7c099f1ce28ee58b807adda6fd3fc4efaa702a9e4a0b0f08.scope: Deactivated successfully.
Nov 24 18:18:42 compute-0 podman[74188]: 2025-11-24 18:18:42.86588314 +0000 UTC m=+0.310098142 container died 21fd72d6bb588a6c7c099f1ce28ee58b807adda6fd3fc4efaa702a9e4a0b0f08 (image=quay.io/ceph/ceph:v18, name=optimistic_cray, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:18:42 compute-0 podman[74188]: 2025-11-24 18:18:42.894552191 +0000 UTC m=+0.338767213 container remove 21fd72d6bb588a6c7c099f1ce28ee58b807adda6fd3fc4efaa702a9e4a0b0f08 (image=quay.io/ceph/ceph:v18, name=optimistic_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 18:18:42 compute-0 systemd[1]: libpod-conmon-21fd72d6bb588a6c7c099f1ce28ee58b807adda6fd3fc4efaa702a9e4a0b0f08.scope: Deactivated successfully.
Nov 24 18:18:42 compute-0 podman[74222]: 2025-11-24 18:18:42.961644425 +0000 UTC m=+0.044327680 container create 5a08f657c75c42814c29ea1eb0adaef9c9232a3b4ae38f721ed1ceecc2ed0a8c (image=quay.io/ceph/ceph:v18, name=suspicious_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 18:18:42 compute-0 systemd[1]: Started libpod-conmon-5a08f657c75c42814c29ea1eb0adaef9c9232a3b4ae38f721ed1ceecc2ed0a8c.scope.
Nov 24 18:18:43 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:18:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/306b1de28570fa31e9274d5605e7110e2473cfc3b86d3bee9b8a033b1942d15e/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:43 compute-0 podman[74222]: 2025-11-24 18:18:43.028506833 +0000 UTC m=+0.111190178 container init 5a08f657c75c42814c29ea1eb0adaef9c9232a3b4ae38f721ed1ceecc2ed0a8c (image=quay.io/ceph/ceph:v18, name=suspicious_torvalds, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 18:18:43 compute-0 podman[74222]: 2025-11-24 18:18:43.032873592 +0000 UTC m=+0.115556847 container start 5a08f657c75c42814c29ea1eb0adaef9c9232a3b4ae38f721ed1ceecc2ed0a8c (image=quay.io/ceph/ceph:v18, name=suspicious_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 18:18:43 compute-0 podman[74222]: 2025-11-24 18:18:43.035829785 +0000 UTC m=+0.118513140 container attach 5a08f657c75c42814c29ea1eb0adaef9c9232a3b4ae38f721ed1ceecc2ed0a8c (image=quay.io/ceph/ceph:v18, name=suspicious_torvalds, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 18:18:43 compute-0 podman[74222]: 2025-11-24 18:18:42.942670044 +0000 UTC m=+0.025353349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:18:43 compute-0 suspicious_torvalds[74239]: /usr/bin/monmaptool: monmap file /tmp/monmap
Nov 24 18:18:43 compute-0 suspicious_torvalds[74239]: setting min_mon_release = pacific
Nov 24 18:18:43 compute-0 suspicious_torvalds[74239]: /usr/bin/monmaptool: set fsid to e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 18:18:43 compute-0 suspicious_torvalds[74239]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Nov 24 18:18:43 compute-0 systemd[1]: libpod-5a08f657c75c42814c29ea1eb0adaef9c9232a3b4ae38f721ed1ceecc2ed0a8c.scope: Deactivated successfully.
Nov 24 18:18:43 compute-0 podman[74222]: 2025-11-24 18:18:43.061460881 +0000 UTC m=+0.144144196 container died 5a08f657c75c42814c29ea1eb0adaef9c9232a3b4ae38f721ed1ceecc2ed0a8c (image=quay.io/ceph/ceph:v18, name=suspicious_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 18:18:43 compute-0 podman[74222]: 2025-11-24 18:18:43.099009502 +0000 UTC m=+0.181692797 container remove 5a08f657c75c42814c29ea1eb0adaef9c9232a3b4ae38f721ed1ceecc2ed0a8c (image=quay.io/ceph/ceph:v18, name=suspicious_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:18:43 compute-0 systemd[1]: libpod-conmon-5a08f657c75c42814c29ea1eb0adaef9c9232a3b4ae38f721ed1ceecc2ed0a8c.scope: Deactivated successfully.
Nov 24 18:18:43 compute-0 podman[74260]: 2025-11-24 18:18:43.160094957 +0000 UTC m=+0.041712115 container create 96974f0d4009e7bb5c463af2c3243d7f4f68c08808d6d55e28b283afe26b17b3 (image=quay.io/ceph/ceph:v18, name=mystifying_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 18:18:43 compute-0 systemd[1]: Started libpod-conmon-96974f0d4009e7bb5c463af2c3243d7f4f68c08808d6d55e28b283afe26b17b3.scope.
Nov 24 18:18:43 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:18:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dfb2e3728ab9376905b152fe8f978b06a92dd32436562e35efa6e2cb7d3a901/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dfb2e3728ab9376905b152fe8f978b06a92dd32436562e35efa6e2cb7d3a901/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dfb2e3728ab9376905b152fe8f978b06a92dd32436562e35efa6e2cb7d3a901/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dfb2e3728ab9376905b152fe8f978b06a92dd32436562e35efa6e2cb7d3a901/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:43 compute-0 podman[74260]: 2025-11-24 18:18:43.217819609 +0000 UTC m=+0.099436787 container init 96974f0d4009e7bb5c463af2c3243d7f4f68c08808d6d55e28b283afe26b17b3 (image=quay.io/ceph/ceph:v18, name=mystifying_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 24 18:18:43 compute-0 podman[74260]: 2025-11-24 18:18:43.225015967 +0000 UTC m=+0.106633115 container start 96974f0d4009e7bb5c463af2c3243d7f4f68c08808d6d55e28b283afe26b17b3 (image=quay.io/ceph/ceph:v18, name=mystifying_chebyshev, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:18:43 compute-0 podman[74260]: 2025-11-24 18:18:43.228499434 +0000 UTC m=+0.110116602 container attach 96974f0d4009e7bb5c463af2c3243d7f4f68c08808d6d55e28b283afe26b17b3 (image=quay.io/ceph/ceph:v18, name=mystifying_chebyshev, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 18:18:43 compute-0 podman[74260]: 2025-11-24 18:18:43.143385453 +0000 UTC m=+0.025002631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:18:43 compute-0 systemd[1]: libpod-96974f0d4009e7bb5c463af2c3243d7f4f68c08808d6d55e28b283afe26b17b3.scope: Deactivated successfully.
Nov 24 18:18:43 compute-0 podman[74260]: 2025-11-24 18:18:43.320055155 +0000 UTC m=+0.201672303 container died 96974f0d4009e7bb5c463af2c3243d7f4f68c08808d6d55e28b283afe26b17b3 (image=quay.io/ceph/ceph:v18, name=mystifying_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 18:18:43 compute-0 podman[74260]: 2025-11-24 18:18:43.356030157 +0000 UTC m=+0.237647315 container remove 96974f0d4009e7bb5c463af2c3243d7f4f68c08808d6d55e28b283afe26b17b3 (image=quay.io/ceph/ceph:v18, name=mystifying_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:18:43 compute-0 systemd[1]: libpod-conmon-96974f0d4009e7bb5c463af2c3243d7f4f68c08808d6d55e28b283afe26b17b3.scope: Deactivated successfully.
Nov 24 18:18:43 compute-0 systemd[1]: Reloading.
Nov 24 18:18:43 compute-0 systemd-sysv-generator[74345]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:18:43 compute-0 systemd-rc-local-generator[74341]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:18:43 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 18:18:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a5fbd4a643ec84a1b384ecb85e67ff50286b8c6d3304293776a1918d8cdeba3-merged.mount: Deactivated successfully.
Nov 24 18:18:43 compute-0 systemd[1]: Reloading.
Nov 24 18:18:43 compute-0 systemd-rc-local-generator[74378]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:18:43 compute-0 systemd-sysv-generator[74381]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:18:43 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Nov 24 18:18:43 compute-0 systemd[1]: Reloading.
Nov 24 18:18:43 compute-0 systemd-rc-local-generator[74417]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:18:43 compute-0 systemd-sysv-generator[74422]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:18:44 compute-0 systemd[1]: Reached target Ceph cluster e5ee928f-099b-569b-93c9-ecf025cbb50d.
Nov 24 18:18:44 compute-0 systemd[1]: Reloading.
Nov 24 18:18:44 compute-0 systemd-rc-local-generator[74453]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:18:44 compute-0 systemd-sysv-generator[74457]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:18:44 compute-0 systemd[1]: Reloading.
Nov 24 18:18:44 compute-0 systemd-sysv-generator[74498]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:18:44 compute-0 systemd-rc-local-generator[74494]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:18:44 compute-0 systemd[1]: Created slice Slice /system/ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d.
Nov 24 18:18:44 compute-0 systemd[1]: Reached target System Time Set.
Nov 24 18:18:44 compute-0 systemd[1]: Reached target System Time Synchronized.
Nov 24 18:18:44 compute-0 systemd[1]: Starting Ceph mon.compute-0 for e5ee928f-099b-569b-93c9-ecf025cbb50d...
Nov 24 18:18:44 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 18:18:44 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 18:18:44 compute-0 podman[74552]: 2025-11-24 18:18:44.825776479 +0000 UTC m=+0.034930717 container create 5efd0838c252a6726be084a5a3e77f2b53c37f06f40551e6853ca14688755acf (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:18:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8a18acab957d2d39d4f0ba9f95371e45b097f9aedc3914bbea3902be2f8e52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8a18acab957d2d39d4f0ba9f95371e45b097f9aedc3914bbea3902be2f8e52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8a18acab957d2d39d4f0ba9f95371e45b097f9aedc3914bbea3902be2f8e52/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8a18acab957d2d39d4f0ba9f95371e45b097f9aedc3914bbea3902be2f8e52/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:44 compute-0 podman[74552]: 2025-11-24 18:18:44.882482976 +0000 UTC m=+0.091637224 container init 5efd0838c252a6726be084a5a3e77f2b53c37f06f40551e6853ca14688755acf (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:18:44 compute-0 podman[74552]: 2025-11-24 18:18:44.888012713 +0000 UTC m=+0.097166971 container start 5efd0838c252a6726be084a5a3e77f2b53c37f06f40551e6853ca14688755acf (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:18:44 compute-0 bash[74552]: 5efd0838c252a6726be084a5a3e77f2b53c37f06f40551e6853ca14688755acf
Nov 24 18:18:44 compute-0 podman[74552]: 2025-11-24 18:18:44.810694495 +0000 UTC m=+0.019848783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:18:44 compute-0 systemd[1]: Started Ceph mon.compute-0 for e5ee928f-099b-569b-93c9-ecf025cbb50d.
Nov 24 18:18:44 compute-0 ceph-mon[74572]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 18:18:44 compute-0 ceph-mon[74572]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 24 18:18:44 compute-0 ceph-mon[74572]: pidfile_write: ignore empty --pid-file
Nov 24 18:18:44 compute-0 ceph-mon[74572]: load: jerasure load: lrc 
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: RocksDB version: 7.9.2
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: Git sha 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: DB SUMMARY
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: DB Session ID:  ABEEGKT7BPHIYELDG0VH
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: CURRENT file:  CURRENT
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                         Options.error_if_exists: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                       Options.create_if_missing: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                                     Options.env: 0x55a9e4d0fc40
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                                Options.info_log: 0x55a9e6d80e80
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                              Options.statistics: (nil)
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                               Options.use_fsync: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                              Options.db_log_dir: 
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                                 Options.wal_dir: 
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                    Options.write_buffer_manager: 0x55a9e6d90b40
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                  Options.unordered_write: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                               Options.row_cache: None
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                              Options.wal_filter: None
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:             Options.two_write_queues: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:             Options.wal_compression: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:             Options.atomic_flush: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:             Options.max_background_jobs: 2
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:             Options.max_background_compactions: -1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:             Options.max_subcompactions: 1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:             Options.max_total_wal_size: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                          Options.max_open_files: -1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:       Options.compaction_readahead_size: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: Compression algorithms supported:
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:         kZSTD supported: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:         kXpressCompression supported: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:         kBZip2Compression supported: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:         kLZ4Compression supported: 1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:         kZlibCompression supported: 1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:         kLZ4HCCompression supported: 1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:         kSnappyCompression supported: 1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:           Options.merge_operator: 
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:        Options.compaction_filter: None
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a9e6d80a80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a9e6d791f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:        Options.write_buffer_size: 33554432
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:  Options.max_write_buffer_number: 2
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:          Options.compression: NoCompression
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:             Options.num_levels: 7
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5bcbf129-cc59-4441-a37f-051fd374ef44
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008324943082, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008324944883, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008324, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "ABEEGKT7BPHIYELDG0VH", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008324945080, "job": 1, "event": "recovery_finished"}
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a9e6da2e00
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: DB pointer 0x55a9e6eac000
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:18:44 compute-0 ceph-mon[74572]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.18 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.18 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a9e6d791f0#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.95 KB,0.000181794%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 24 18:18:44 compute-0 ceph-mon[74572]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@-1(???) e0 preinit fsid e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@0(probing) e0 win_standalone_election
Nov 24 18:18:44 compute-0 ceph-mon[74572]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 18:18:44 compute-0 ceph-mon[74572]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 24 18:18:44 compute-0 ceph-mon[74572]: paxos.0).electionLogic(2) init, last seen epoch 2
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 18:18:44 compute-0 ceph-mon[74572]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 24 18:18:44 compute-0 ceph-mon[74572]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-11-24T18:18:43.270587Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025,kernel_version=5.14.0-639.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,os=Linux}
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@0(leader).mds e1 new map
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 24 18:18:44 compute-0 ceph-mon[74572]: log_channel(cluster) log [DBG] : fsmap 
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mkfs e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 18:18:44 compute-0 ceph-mon[74572]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Nov 24 18:18:44 compute-0 podman[74573]: 2025-11-24 18:18:44.996129145 +0000 UTC m=+0.064064550 container create 4ac3bce430c15daa950c6a4736be8839497f3fee723fbe2fe1c7b970806ee286 (image=quay.io/ceph/ceph:v18, name=inspiring_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 18:18:45 compute-0 ceph-mon[74572]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 24 18:18:45 compute-0 ceph-mon[74572]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 24 18:18:45 compute-0 ceph-mon[74572]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 24 18:18:45 compute-0 systemd[1]: Started libpod-conmon-4ac3bce430c15daa950c6a4736be8839497f3fee723fbe2fe1c7b970806ee286.scope.
Nov 24 18:18:45 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:18:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24cd9d752b0c7f2861011ea3d3db6848ec9c127eb06362d116c403de80da6b74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24cd9d752b0c7f2861011ea3d3db6848ec9c127eb06362d116c403de80da6b74/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24cd9d752b0c7f2861011ea3d3db6848ec9c127eb06362d116c403de80da6b74/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:45 compute-0 podman[74573]: 2025-11-24 18:18:44.972021247 +0000 UTC m=+0.039956692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:18:45 compute-0 podman[74573]: 2025-11-24 18:18:45.077640506 +0000 UTC m=+0.145575931 container init 4ac3bce430c15daa950c6a4736be8839497f3fee723fbe2fe1c7b970806ee286 (image=quay.io/ceph/ceph:v18, name=inspiring_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 18:18:45 compute-0 podman[74573]: 2025-11-24 18:18:45.084062926 +0000 UTC m=+0.151998331 container start 4ac3bce430c15daa950c6a4736be8839497f3fee723fbe2fe1c7b970806ee286 (image=quay.io/ceph/ceph:v18, name=inspiring_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 24 18:18:45 compute-0 podman[74573]: 2025-11-24 18:18:45.086872865 +0000 UTC m=+0.154808310 container attach 4ac3bce430c15daa950c6a4736be8839497f3fee723fbe2fe1c7b970806ee286 (image=quay.io/ceph/ceph:v18, name=inspiring_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:18:45 compute-0 ceph-mon[74572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 24 18:18:45 compute-0 ceph-mon[74572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1437311656' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 18:18:45 compute-0 inspiring_mccarthy[74628]:   cluster:
Nov 24 18:18:45 compute-0 inspiring_mccarthy[74628]:     id:     e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 18:18:45 compute-0 inspiring_mccarthy[74628]:     health: HEALTH_OK
Nov 24 18:18:45 compute-0 inspiring_mccarthy[74628]:  
Nov 24 18:18:45 compute-0 inspiring_mccarthy[74628]:   services:
Nov 24 18:18:45 compute-0 inspiring_mccarthy[74628]:     mon: 1 daemons, quorum compute-0 (age 0.484226s)
Nov 24 18:18:45 compute-0 inspiring_mccarthy[74628]:     mgr: no daemons active
Nov 24 18:18:45 compute-0 inspiring_mccarthy[74628]:     osd: 0 osds: 0 up, 0 in
Nov 24 18:18:45 compute-0 inspiring_mccarthy[74628]:  
Nov 24 18:18:45 compute-0 inspiring_mccarthy[74628]:   data:
Nov 24 18:18:45 compute-0 inspiring_mccarthy[74628]:     pools:   0 pools, 0 pgs
Nov 24 18:18:45 compute-0 inspiring_mccarthy[74628]:     objects: 0 objects, 0 B
Nov 24 18:18:45 compute-0 inspiring_mccarthy[74628]:     usage:   0 B used, 0 B / 0 B avail
Nov 24 18:18:45 compute-0 inspiring_mccarthy[74628]:     pgs:     
Nov 24 18:18:45 compute-0 inspiring_mccarthy[74628]:  
Nov 24 18:18:45 compute-0 systemd[1]: libpod-4ac3bce430c15daa950c6a4736be8839497f3fee723fbe2fe1c7b970806ee286.scope: Deactivated successfully.
Nov 24 18:18:45 compute-0 podman[74573]: 2025-11-24 18:18:45.477005302 +0000 UTC m=+0.544940747 container died 4ac3bce430c15daa950c6a4736be8839497f3fee723fbe2fe1c7b970806ee286 (image=quay.io/ceph/ceph:v18, name=inspiring_mccarthy, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 18:18:45 compute-0 podman[74573]: 2025-11-24 18:18:45.514565833 +0000 UTC m=+0.582501238 container remove 4ac3bce430c15daa950c6a4736be8839497f3fee723fbe2fe1c7b970806ee286 (image=quay.io/ceph/ceph:v18, name=inspiring_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:18:45 compute-0 systemd[1]: libpod-conmon-4ac3bce430c15daa950c6a4736be8839497f3fee723fbe2fe1c7b970806ee286.scope: Deactivated successfully.
Nov 24 18:18:45 compute-0 podman[74666]: 2025-11-24 18:18:45.575422823 +0000 UTC m=+0.040933337 container create 1e06199fa9be875966b42d4fe09291389411808aa31b33a955a35ab540f5a240 (image=quay.io/ceph/ceph:v18, name=ecstatic_chaum, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 24 18:18:45 compute-0 systemd[1]: Started libpod-conmon-1e06199fa9be875966b42d4fe09291389411808aa31b33a955a35ab540f5a240.scope.
Nov 24 18:18:45 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:18:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6bb7f0ef9ca30d1e124faf17699289a0c08b8cef7a7bad29b8572ddce1bea50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6bb7f0ef9ca30d1e124faf17699289a0c08b8cef7a7bad29b8572ddce1bea50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6bb7f0ef9ca30d1e124faf17699289a0c08b8cef7a7bad29b8572ddce1bea50/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6bb7f0ef9ca30d1e124faf17699289a0c08b8cef7a7bad29b8572ddce1bea50/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:45 compute-0 podman[74666]: 2025-11-24 18:18:45.555630982 +0000 UTC m=+0.021141506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:18:45 compute-0 podman[74666]: 2025-11-24 18:18:45.658211146 +0000 UTC m=+0.123721740 container init 1e06199fa9be875966b42d4fe09291389411808aa31b33a955a35ab540f5a240 (image=quay.io/ceph/ceph:v18, name=ecstatic_chaum, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 18:18:45 compute-0 podman[74666]: 2025-11-24 18:18:45.665019005 +0000 UTC m=+0.130529519 container start 1e06199fa9be875966b42d4fe09291389411808aa31b33a955a35ab540f5a240 (image=quay.io/ceph/ceph:v18, name=ecstatic_chaum, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 18:18:45 compute-0 podman[74666]: 2025-11-24 18:18:45.668174033 +0000 UTC m=+0.133684587 container attach 1e06199fa9be875966b42d4fe09291389411808aa31b33a955a35ab540f5a240 (image=quay.io/ceph/ceph:v18, name=ecstatic_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:18:46 compute-0 ceph-mon[74572]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 24 18:18:46 compute-0 ceph-mon[74572]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 24 18:18:46 compute-0 ceph-mon[74572]: fsmap 
Nov 24 18:18:46 compute-0 ceph-mon[74572]: osdmap e1: 0 total, 0 up, 0 in
Nov 24 18:18:46 compute-0 ceph-mon[74572]: mgrmap e1: no daemons active
Nov 24 18:18:46 compute-0 ceph-mon[74572]: from='client.? 192.168.122.100:0/1437311656' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 18:18:46 compute-0 ceph-mon[74572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 24 18:18:46 compute-0 ceph-mon[74572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2974578884' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 24 18:18:46 compute-0 ceph-mon[74572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2974578884' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 24 18:18:46 compute-0 ecstatic_chaum[74682]: 
Nov 24 18:18:46 compute-0 ecstatic_chaum[74682]: [global]
Nov 24 18:18:46 compute-0 ecstatic_chaum[74682]:         fsid = e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 18:18:46 compute-0 ecstatic_chaum[74682]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Nov 24 18:18:46 compute-0 ecstatic_chaum[74682]:         osd_crush_chooseleaf_type = 0
Nov 24 18:18:46 compute-0 systemd[1]: libpod-1e06199fa9be875966b42d4fe09291389411808aa31b33a955a35ab540f5a240.scope: Deactivated successfully.
Nov 24 18:18:46 compute-0 podman[74666]: 2025-11-24 18:18:46.075405153 +0000 UTC m=+0.540915677 container died 1e06199fa9be875966b42d4fe09291389411808aa31b33a955a35ab540f5a240 (image=quay.io/ceph/ceph:v18, name=ecstatic_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:18:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6bb7f0ef9ca30d1e124faf17699289a0c08b8cef7a7bad29b8572ddce1bea50-merged.mount: Deactivated successfully.
Nov 24 18:18:46 compute-0 podman[74666]: 2025-11-24 18:18:46.1155705 +0000 UTC m=+0.581081014 container remove 1e06199fa9be875966b42d4fe09291389411808aa31b33a955a35ab540f5a240 (image=quay.io/ceph/ceph:v18, name=ecstatic_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 18:18:46 compute-0 systemd[1]: libpod-conmon-1e06199fa9be875966b42d4fe09291389411808aa31b33a955a35ab540f5a240.scope: Deactivated successfully.
Nov 24 18:18:46 compute-0 podman[74720]: 2025-11-24 18:18:46.1704036 +0000 UTC m=+0.036669391 container create 1bf98407d6e75106abac6f86aeb21929728513d9181c8be15643a5cb08832137 (image=quay.io/ceph/ceph:v18, name=hungry_pare, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:18:46 compute-0 systemd[1]: Started libpod-conmon-1bf98407d6e75106abac6f86aeb21929728513d9181c8be15643a5cb08832137.scope.
Nov 24 18:18:46 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:18:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dd0082c546c5993355ec683f17088cba3f0c719878a5a6a7b9a3bb877b60994/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dd0082c546c5993355ec683f17088cba3f0c719878a5a6a7b9a3bb877b60994/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dd0082c546c5993355ec683f17088cba3f0c719878a5a6a7b9a3bb877b60994/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dd0082c546c5993355ec683f17088cba3f0c719878a5a6a7b9a3bb877b60994/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:46 compute-0 podman[74720]: 2025-11-24 18:18:46.225151808 +0000 UTC m=+0.091417599 container init 1bf98407d6e75106abac6f86aeb21929728513d9181c8be15643a5cb08832137 (image=quay.io/ceph/ceph:v18, name=hungry_pare, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:18:46 compute-0 podman[74720]: 2025-11-24 18:18:46.23048936 +0000 UTC m=+0.096755151 container start 1bf98407d6e75106abac6f86aeb21929728513d9181c8be15643a5cb08832137 (image=quay.io/ceph/ceph:v18, name=hungry_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 18:18:46 compute-0 podman[74720]: 2025-11-24 18:18:46.233358311 +0000 UTC m=+0.099624122 container attach 1bf98407d6e75106abac6f86aeb21929728513d9181c8be15643a5cb08832137 (image=quay.io/ceph/ceph:v18, name=hungry_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 18:18:46 compute-0 podman[74720]: 2025-11-24 18:18:46.154636889 +0000 UTC m=+0.020902700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:18:46 compute-0 ceph-mon[74572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:18:46 compute-0 ceph-mon[74572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4154862740' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:18:46 compute-0 systemd[1]: libpod-1bf98407d6e75106abac6f86aeb21929728513d9181c8be15643a5cb08832137.scope: Deactivated successfully.
Nov 24 18:18:46 compute-0 podman[74720]: 2025-11-24 18:18:46.603094691 +0000 UTC m=+0.469360482 container died 1bf98407d6e75106abac6f86aeb21929728513d9181c8be15643a5cb08832137 (image=quay.io/ceph/ceph:v18, name=hungry_pare, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 18:18:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-7dd0082c546c5993355ec683f17088cba3f0c719878a5a6a7b9a3bb877b60994-merged.mount: Deactivated successfully.
Nov 24 18:18:46 compute-0 podman[74720]: 2025-11-24 18:18:46.642188071 +0000 UTC m=+0.508453862 container remove 1bf98407d6e75106abac6f86aeb21929728513d9181c8be15643a5cb08832137 (image=quay.io/ceph/ceph:v18, name=hungry_pare, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 24 18:18:46 compute-0 systemd[1]: libpod-conmon-1bf98407d6e75106abac6f86aeb21929728513d9181c8be15643a5cb08832137.scope: Deactivated successfully.
Nov 24 18:18:46 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for e5ee928f-099b-569b-93c9-ecf025cbb50d...
Nov 24 18:18:46 compute-0 ceph-mon[74572]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 24 18:18:46 compute-0 ceph-mon[74572]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 24 18:18:46 compute-0 ceph-mon[74572]: mon.compute-0@0(leader) e1 shutdown
Nov 24 18:18:46 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0[74568]: 2025-11-24T18:18:46.802+0000 7fed6ffcf640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 24 18:18:46 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0[74568]: 2025-11-24T18:18:46.802+0000 7fed6ffcf640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 24 18:18:46 compute-0 ceph-mon[74572]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 24 18:18:46 compute-0 ceph-mon[74572]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 24 18:18:47 compute-0 podman[74807]: 2025-11-24 18:18:47.016328011 +0000 UTC m=+0.240509347 container died 5efd0838c252a6726be084a5a3e77f2b53c37f06f40551e6853ca14688755acf (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 18:18:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc8a18acab957d2d39d4f0ba9f95371e45b097f9aedc3914bbea3902be2f8e52-merged.mount: Deactivated successfully.
Nov 24 18:18:47 compute-0 podman[74807]: 2025-11-24 18:18:47.049951785 +0000 UTC m=+0.274133121 container remove 5efd0838c252a6726be084a5a3e77f2b53c37f06f40551e6853ca14688755acf (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:18:47 compute-0 bash[74807]: ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0
Nov 24 18:18:47 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 18:18:47 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 18:18:47 compute-0 systemd[1]: ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d@mon.compute-0.service: Deactivated successfully.
Nov 24 18:18:47 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for e5ee928f-099b-569b-93c9-ecf025cbb50d.
Nov 24 18:18:47 compute-0 systemd[1]: Starting Ceph mon.compute-0 for e5ee928f-099b-569b-93c9-ecf025cbb50d...
Nov 24 18:18:47 compute-0 podman[74908]: 2025-11-24 18:18:47.343813642 +0000 UTC m=+0.035253015 container create 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7adb674b6d57fd032ad5bb9b8bc1f2f5a488878b6697c450e0fd8b76abe39601/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7adb674b6d57fd032ad5bb9b8bc1f2f5a488878b6697c450e0fd8b76abe39601/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7adb674b6d57fd032ad5bb9b8bc1f2f5a488878b6697c450e0fd8b76abe39601/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7adb674b6d57fd032ad5bb9b8bc1f2f5a488878b6697c450e0fd8b76abe39601/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:47 compute-0 podman[74908]: 2025-11-24 18:18:47.388394298 +0000 UTC m=+0.079833691 container init 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 18:18:47 compute-0 podman[74908]: 2025-11-24 18:18:47.393487834 +0000 UTC m=+0.084927207 container start 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 18:18:47 compute-0 bash[74908]: 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d
Nov 24 18:18:47 compute-0 podman[74908]: 2025-11-24 18:18:47.328760839 +0000 UTC m=+0.020200232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:18:47 compute-0 systemd[1]: Started Ceph mon.compute-0 for e5ee928f-099b-569b-93c9-ecf025cbb50d.
Nov 24 18:18:47 compute-0 ceph-mon[74927]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 18:18:47 compute-0 ceph-mon[74927]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 24 18:18:47 compute-0 ceph-mon[74927]: pidfile_write: ignore empty --pid-file
Nov 24 18:18:47 compute-0 ceph-mon[74927]: load: jerasure load: lrc 
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: RocksDB version: 7.9.2
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: Git sha 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: DB SUMMARY
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: DB Session ID:  WW3CBZDUF00LP3K0CKDH
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: CURRENT file:  CURRENT
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 52078 ; 
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                         Options.error_if_exists: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                       Options.create_if_missing: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                                     Options.env: 0x562aef2cfc40
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                                Options.info_log: 0x562af0d05040
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                              Options.statistics: (nil)
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                               Options.use_fsync: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                              Options.db_log_dir: 
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                                 Options.wal_dir: 
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                    Options.write_buffer_manager: 0x562af0d14b40
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                  Options.unordered_write: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                               Options.row_cache: None
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                              Options.wal_filter: None
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:             Options.two_write_queues: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:             Options.wal_compression: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:             Options.atomic_flush: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:             Options.max_background_jobs: 2
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:             Options.max_background_compactions: -1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:             Options.max_subcompactions: 1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:             Options.max_total_wal_size: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                          Options.max_open_files: -1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:       Options.compaction_readahead_size: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: Compression algorithms supported:
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:         kZSTD supported: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:         kXpressCompression supported: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:         kBZip2Compression supported: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:         kLZ4Compression supported: 1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:         kZlibCompression supported: 1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:         kLZ4HCCompression supported: 1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:         kSnappyCompression supported: 1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:           Options.merge_operator: 
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:        Options.compaction_filter: None
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562af0d04c40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562af0cfd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:        Options.write_buffer_size: 33554432
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:  Options.max_write_buffer_number: 2
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:          Options.compression: NoCompression
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:             Options.num_levels: 7
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5bcbf129-cc59-4441-a37f-051fd374ef44
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008327427704, "job": 1, "event": "recovery_started", "wal_files": [9]}
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008327429768, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 51794, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 129, "table_properties": {"data_size": 50351, "index_size": 149, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2940, "raw_average_key_size": 30, "raw_value_size": 48030, "raw_average_value_size": 500, "num_data_blocks": 7, "num_entries": 96, "num_filter_entries": 96, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008327, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008327429855, "job": 1, "event": "recovery_finished"}
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562af0d26e00
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: DB pointer 0x562af0e2e000
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:18:47 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   52.48 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     28.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0   52.48 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     28.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     28.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     28.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 4.97 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 4.97 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562af0cfd1f0#2 capacity: 512.00 MB usage: 0.77 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.34 KB,6.55651e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 24 18:18:47 compute-0 ceph-mon[74927]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 18:18:47 compute-0 ceph-mon[74927]: mon.compute-0@-1(???) e1 preinit fsid e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 18:18:47 compute-0 ceph-mon[74927]: mon.compute-0@-1(???).mds e1 new map
Nov 24 18:18:47 compute-0 ceph-mon[74927]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Nov 24 18:18:47 compute-0 ceph-mon[74927]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 24 18:18:47 compute-0 ceph-mon[74927]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 24 18:18:47 compute-0 ceph-mon[74927]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 24 18:18:47 compute-0 ceph-mon[74927]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 24 18:18:47 compute-0 ceph-mon[74927]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Nov 24 18:18:47 compute-0 ceph-mon[74927]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Nov 24 18:18:47 compute-0 ceph-mon[74927]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 24 18:18:47 compute-0 ceph-mon[74927]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Nov 24 18:18:47 compute-0 ceph-mon[74927]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 18:18:47 compute-0 ceph-mon[74927]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 24 18:18:47 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 24 18:18:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 18:18:47 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : fsmap 
Nov 24 18:18:47 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 24 18:18:47 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 24 18:18:47 compute-0 podman[74928]: 2025-11-24 18:18:47.486975583 +0000 UTC m=+0.055760654 container create 5015cb42b9db880408ba603f90feb19254c1e9044fa7d7b1cc3c8e523c838a1b (image=quay.io/ceph/ceph:v18, name=dazzling_shockley, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 18:18:47 compute-0 ceph-mon[74927]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 24 18:18:47 compute-0 ceph-mon[74927]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 24 18:18:47 compute-0 ceph-mon[74927]: fsmap 
Nov 24 18:18:47 compute-0 ceph-mon[74927]: osdmap e1: 0 total, 0 up, 0 in
Nov 24 18:18:47 compute-0 ceph-mon[74927]: mgrmap e1: no daemons active
Nov 24 18:18:47 compute-0 systemd[1]: Started libpod-conmon-5015cb42b9db880408ba603f90feb19254c1e9044fa7d7b1cc3c8e523c838a1b.scope.
Nov 24 18:18:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:18:47 compute-0 podman[74928]: 2025-11-24 18:18:47.469571771 +0000 UTC m=+0.038356822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0c6787e2f598e5ef98bb043eb3a0563e76f1ab2e37da4259490c7479f76381b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0c6787e2f598e5ef98bb043eb3a0563e76f1ab2e37da4259490c7479f76381b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0c6787e2f598e5ef98bb043eb3a0563e76f1ab2e37da4259490c7479f76381b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:47 compute-0 podman[74928]: 2025-11-24 18:18:47.581847496 +0000 UTC m=+0.150632597 container init 5015cb42b9db880408ba603f90feb19254c1e9044fa7d7b1cc3c8e523c838a1b (image=quay.io/ceph/ceph:v18, name=dazzling_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 18:18:47 compute-0 podman[74928]: 2025-11-24 18:18:47.587837905 +0000 UTC m=+0.156622946 container start 5015cb42b9db880408ba603f90feb19254c1e9044fa7d7b1cc3c8e523c838a1b (image=quay.io/ceph/ceph:v18, name=dazzling_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 18:18:47 compute-0 podman[74928]: 2025-11-24 18:18:47.591714121 +0000 UTC m=+0.160499162 container attach 5015cb42b9db880408ba603f90feb19254c1e9044fa7d7b1cc3c8e523c838a1b (image=quay.io/ceph/ceph:v18, name=dazzling_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Nov 24 18:18:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Nov 24 18:18:48 compute-0 systemd[1]: libpod-5015cb42b9db880408ba603f90feb19254c1e9044fa7d7b1cc3c8e523c838a1b.scope: Deactivated successfully.
Nov 24 18:18:48 compute-0 podman[74928]: 2025-11-24 18:18:48.027385227 +0000 UTC m=+0.596170258 container died 5015cb42b9db880408ba603f90feb19254c1e9044fa7d7b1cc3c8e523c838a1b (image=quay.io/ceph/ceph:v18, name=dazzling_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:18:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0c6787e2f598e5ef98bb043eb3a0563e76f1ab2e37da4259490c7479f76381b-merged.mount: Deactivated successfully.
Nov 24 18:18:48 compute-0 podman[74928]: 2025-11-24 18:18:48.07912762 +0000 UTC m=+0.647912701 container remove 5015cb42b9db880408ba603f90feb19254c1e9044fa7d7b1cc3c8e523c838a1b (image=quay.io/ceph/ceph:v18, name=dazzling_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:18:48 compute-0 systemd[1]: libpod-conmon-5015cb42b9db880408ba603f90feb19254c1e9044fa7d7b1cc3c8e523c838a1b.scope: Deactivated successfully.
Nov 24 18:18:48 compute-0 podman[75020]: 2025-11-24 18:18:48.162232261 +0000 UTC m=+0.045688584 container create 89698cdc826e9b3a1ea7f34820b285da92a1c9a52d893e025ddc008a5bf038a8 (image=quay.io/ceph/ceph:v18, name=ecstatic_bose, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 18:18:48 compute-0 systemd[1]: Started libpod-conmon-89698cdc826e9b3a1ea7f34820b285da92a1c9a52d893e025ddc008a5bf038a8.scope.
Nov 24 18:18:48 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3ac6a212b8dc3b67ee30bec0b38bddb9f65760b822dc03877a698c5d5b0a3d1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3ac6a212b8dc3b67ee30bec0b38bddb9f65760b822dc03877a698c5d5b0a3d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3ac6a212b8dc3b67ee30bec0b38bddb9f65760b822dc03877a698c5d5b0a3d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:48 compute-0 podman[75020]: 2025-11-24 18:18:48.145492426 +0000 UTC m=+0.028948739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:18:48 compute-0 podman[75020]: 2025-11-24 18:18:48.246365618 +0000 UTC m=+0.129822011 container init 89698cdc826e9b3a1ea7f34820b285da92a1c9a52d893e025ddc008a5bf038a8 (image=quay.io/ceph/ceph:v18, name=ecstatic_bose, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:18:48 compute-0 podman[75020]: 2025-11-24 18:18:48.250794648 +0000 UTC m=+0.134250991 container start 89698cdc826e9b3a1ea7f34820b285da92a1c9a52d893e025ddc008a5bf038a8 (image=quay.io/ceph/ceph:v18, name=ecstatic_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:18:48 compute-0 podman[75020]: 2025-11-24 18:18:48.254726195 +0000 UTC m=+0.138182538 container attach 89698cdc826e9b3a1ea7f34820b285da92a1c9a52d893e025ddc008a5bf038a8 (image=quay.io/ceph/ceph:v18, name=ecstatic_bose, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 18:18:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Nov 24 18:18:48 compute-0 systemd[1]: libpod-89698cdc826e9b3a1ea7f34820b285da92a1c9a52d893e025ddc008a5bf038a8.scope: Deactivated successfully.
Nov 24 18:18:48 compute-0 podman[75020]: 2025-11-24 18:18:48.671512823 +0000 UTC m=+0.554969206 container died 89698cdc826e9b3a1ea7f34820b285da92a1c9a52d893e025ddc008a5bf038a8 (image=quay.io/ceph/ceph:v18, name=ecstatic_bose, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 18:18:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3ac6a212b8dc3b67ee30bec0b38bddb9f65760b822dc03877a698c5d5b0a3d1-merged.mount: Deactivated successfully.
Nov 24 18:18:48 compute-0 podman[75020]: 2025-11-24 18:18:48.728753863 +0000 UTC m=+0.612210196 container remove 89698cdc826e9b3a1ea7f34820b285da92a1c9a52d893e025ddc008a5bf038a8 (image=quay.io/ceph/ceph:v18, name=ecstatic_bose, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 18:18:48 compute-0 systemd[1]: libpod-conmon-89698cdc826e9b3a1ea7f34820b285da92a1c9a52d893e025ddc008a5bf038a8.scope: Deactivated successfully.
Nov 24 18:18:48 compute-0 systemd[1]: Reloading.
Nov 24 18:18:48 compute-0 systemd-sysv-generator[75103]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:18:48 compute-0 systemd-rc-local-generator[75099]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:18:49 compute-0 systemd[1]: Reloading.
Nov 24 18:18:49 compute-0 systemd-rc-local-generator[75139]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:18:49 compute-0 systemd-sysv-generator[75143]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:18:49 compute-0 systemd[1]: Starting Ceph mgr.compute-0.dfqptp for e5ee928f-099b-569b-93c9-ecf025cbb50d...
Nov 24 18:18:49 compute-0 podman[75199]: 2025-11-24 18:18:49.634565899 +0000 UTC m=+0.065841804 container create 9eef9f776910beb7e6266469ef16ac3700a5a8c6b4085baaa34e42834d3065ec (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 18:18:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b845f98033c45bfaf39f84ded92c28d317ea5728d8257bc2709d1ffecb44de5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b845f98033c45bfaf39f84ded92c28d317ea5728d8257bc2709d1ffecb44de5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b845f98033c45bfaf39f84ded92c28d317ea5728d8257bc2709d1ffecb44de5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b845f98033c45bfaf39f84ded92c28d317ea5728d8257bc2709d1ffecb44de5e/merged/var/lib/ceph/mgr/ceph-compute-0.dfqptp supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:49 compute-0 podman[75199]: 2025-11-24 18:18:49.692176098 +0000 UTC m=+0.123452013 container init 9eef9f776910beb7e6266469ef16ac3700a5a8c6b4085baaa34e42834d3065ec (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 18:18:49 compute-0 podman[75199]: 2025-11-24 18:18:49.701906559 +0000 UTC m=+0.133182454 container start 9eef9f776910beb7e6266469ef16ac3700a5a8c6b4085baaa34e42834d3065ec (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 18:18:49 compute-0 podman[75199]: 2025-11-24 18:18:49.609741863 +0000 UTC m=+0.041017798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:18:49 compute-0 bash[75199]: 9eef9f776910beb7e6266469ef16ac3700a5a8c6b4085baaa34e42834d3065ec
Nov 24 18:18:49 compute-0 systemd[1]: Started Ceph mgr.compute-0.dfqptp for e5ee928f-099b-569b-93c9-ecf025cbb50d.
Nov 24 18:18:49 compute-0 ceph-mgr[75218]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 18:18:49 compute-0 ceph-mgr[75218]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 24 18:18:49 compute-0 ceph-mgr[75218]: pidfile_write: ignore empty --pid-file
Nov 24 18:18:49 compute-0 podman[75219]: 2025-11-24 18:18:49.822640034 +0000 UTC m=+0.063432545 container create abafd915e23be530285c5b7111ad4d7ff886aa7eebf68d79f9780be973716523 (image=quay.io/ceph/ceph:v18, name=priceless_jennings, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 18:18:49 compute-0 systemd[1]: Started libpod-conmon-abafd915e23be530285c5b7111ad4d7ff886aa7eebf68d79f9780be973716523.scope.
Nov 24 18:18:49 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:18:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c31705bf79522c50c3bf1f55f6f506b16261a28ccb0241955ebafd688445ffab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c31705bf79522c50c3bf1f55f6f506b16261a28ccb0241955ebafd688445ffab/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c31705bf79522c50c3bf1f55f6f506b16261a28ccb0241955ebafd688445ffab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:49 compute-0 podman[75219]: 2025-11-24 18:18:49.805271863 +0000 UTC m=+0.046064354 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:18:49 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'alerts'
Nov 24 18:18:49 compute-0 podman[75219]: 2025-11-24 18:18:49.922710806 +0000 UTC m=+0.163503287 container init abafd915e23be530285c5b7111ad4d7ff886aa7eebf68d79f9780be973716523 (image=quay.io/ceph/ceph:v18, name=priceless_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 18:18:49 compute-0 podman[75219]: 2025-11-24 18:18:49.931726999 +0000 UTC m=+0.172519510 container start abafd915e23be530285c5b7111ad4d7ff886aa7eebf68d79f9780be973716523 (image=quay.io/ceph/ceph:v18, name=priceless_jennings, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 18:18:49 compute-0 podman[75219]: 2025-11-24 18:18:49.93537351 +0000 UTC m=+0.176166001 container attach abafd915e23be530285c5b7111ad4d7ff886aa7eebf68d79f9780be973716523 (image=quay.io/ceph/ceph:v18, name=priceless_jennings, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 18:18:50 compute-0 ceph-mgr[75218]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 18:18:50 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'balancer'
Nov 24 18:18:50 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:18:50.212+0000 7f1ca3816140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 18:18:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 24 18:18:50 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2861444832' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 18:18:50 compute-0 priceless_jennings[75259]: 
Nov 24 18:18:50 compute-0 priceless_jennings[75259]: {
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:     "fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:     "health": {
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "status": "HEALTH_OK",
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "checks": {},
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "mutes": []
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:     },
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:     "election_epoch": 5,
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:     "quorum": [
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         0
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:     ],
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:     "quorum_names": [
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "compute-0"
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:     ],
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:     "quorum_age": 2,
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:     "monmap": {
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "epoch": 1,
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "min_mon_release_name": "reef",
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "num_mons": 1
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:     },
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:     "osdmap": {
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "epoch": 1,
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "num_osds": 0,
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "num_up_osds": 0,
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "osd_up_since": 0,
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "num_in_osds": 0,
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "osd_in_since": 0,
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "num_remapped_pgs": 0
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:     },
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:     "pgmap": {
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "pgs_by_state": [],
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "num_pgs": 0,
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "num_pools": 0,
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "num_objects": 0,
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "data_bytes": 0,
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "bytes_used": 0,
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "bytes_avail": 0,
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "bytes_total": 0
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:     },
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:     "fsmap": {
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "epoch": 1,
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "by_rank": [],
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "up:standby": 0
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:     },
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:     "mgrmap": {
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "available": false,
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "num_standbys": 0,
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "modules": [
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:             "iostat",
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:             "nfs",
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:             "restful"
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         ],
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "services": {}
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:     },
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:     "servicemap": {
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "epoch": 1,
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "modified": "2025-11-24T18:18:44.978620+0000",
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:         "services": {}
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:     },
Nov 24 18:18:50 compute-0 priceless_jennings[75259]:     "progress_events": {}
Nov 24 18:18:50 compute-0 priceless_jennings[75259]: }
Nov 24 18:18:50 compute-0 systemd[1]: libpod-abafd915e23be530285c5b7111ad4d7ff886aa7eebf68d79f9780be973716523.scope: Deactivated successfully.
Nov 24 18:18:50 compute-0 podman[75219]: 2025-11-24 18:18:50.334568441 +0000 UTC m=+0.575360912 container died abafd915e23be530285c5b7111ad4d7ff886aa7eebf68d79f9780be973716523 (image=quay.io/ceph/ceph:v18, name=priceless_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:18:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-c31705bf79522c50c3bf1f55f6f506b16261a28ccb0241955ebafd688445ffab-merged.mount: Deactivated successfully.
Nov 24 18:18:50 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2861444832' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 18:18:50 compute-0 podman[75219]: 2025-11-24 18:18:50.377381873 +0000 UTC m=+0.618174364 container remove abafd915e23be530285c5b7111ad4d7ff886aa7eebf68d79f9780be973716523 (image=quay.io/ceph/ceph:v18, name=priceless_jennings, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 18:18:50 compute-0 systemd[1]: libpod-conmon-abafd915e23be530285c5b7111ad4d7ff886aa7eebf68d79f9780be973716523.scope: Deactivated successfully.
Nov 24 18:18:50 compute-0 ceph-mgr[75218]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 18:18:50 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'cephadm'
Nov 24 18:18:50 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:18:50.523+0000 7f1ca3816140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 18:18:52 compute-0 podman[75311]: 2025-11-24 18:18:52.452871389 +0000 UTC m=+0.048187696 container create 5c3815131edaa6909ff90dc448770e47cefecc3f61b46ecadd9c47ea9aa0d5cb (image=quay.io/ceph/ceph:v18, name=quirky_shockley, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:18:52 compute-0 systemd[1]: Started libpod-conmon-5c3815131edaa6909ff90dc448770e47cefecc3f61b46ecadd9c47ea9aa0d5cb.scope.
Nov 24 18:18:52 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:18:52 compute-0 podman[75311]: 2025-11-24 18:18:52.426751142 +0000 UTC m=+0.022067459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1d47cb8a0b5de75cdfd19be374299a8a4869afddca4a276aa41ca1d6d8cbad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1d47cb8a0b5de75cdfd19be374299a8a4869afddca4a276aa41ca1d6d8cbad/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1d47cb8a0b5de75cdfd19be374299a8a4869afddca4a276aa41ca1d6d8cbad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:52 compute-0 podman[75311]: 2025-11-24 18:18:52.536660758 +0000 UTC m=+0.131977065 container init 5c3815131edaa6909ff90dc448770e47cefecc3f61b46ecadd9c47ea9aa0d5cb (image=quay.io/ceph/ceph:v18, name=quirky_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:18:52 compute-0 podman[75311]: 2025-11-24 18:18:52.543000865 +0000 UTC m=+0.138317212 container start 5c3815131edaa6909ff90dc448770e47cefecc3f61b46ecadd9c47ea9aa0d5cb (image=quay.io/ceph/ceph:v18, name=quirky_shockley, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 18:18:52 compute-0 podman[75311]: 2025-11-24 18:18:52.547717102 +0000 UTC m=+0.143033409 container attach 5c3815131edaa6909ff90dc448770e47cefecc3f61b46ecadd9c47ea9aa0d5cb (image=quay.io/ceph/ceph:v18, name=quirky_shockley, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 18:18:52 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'crash'
Nov 24 18:18:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 24 18:18:52 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3006523604' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 18:18:52 compute-0 quirky_shockley[75328]: 
Nov 24 18:18:52 compute-0 quirky_shockley[75328]: {
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:     "fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:     "health": {
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "status": "HEALTH_OK",
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "checks": {},
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "mutes": []
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:     },
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:     "election_epoch": 5,
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:     "quorum": [
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         0
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:     ],
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:     "quorum_names": [
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "compute-0"
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:     ],
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:     "quorum_age": 5,
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:     "monmap": {
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "epoch": 1,
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "min_mon_release_name": "reef",
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "num_mons": 1
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:     },
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:     "osdmap": {
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "epoch": 1,
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "num_osds": 0,
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "num_up_osds": 0,
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "osd_up_since": 0,
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "num_in_osds": 0,
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "osd_in_since": 0,
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "num_remapped_pgs": 0
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:     },
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:     "pgmap": {
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "pgs_by_state": [],
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "num_pgs": 0,
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "num_pools": 0,
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "num_objects": 0,
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "data_bytes": 0,
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "bytes_used": 0,
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "bytes_avail": 0,
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "bytes_total": 0
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:     },
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:     "fsmap": {
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "epoch": 1,
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "by_rank": [],
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "up:standby": 0
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:     },
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:     "mgrmap": {
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "available": false,
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "num_standbys": 0,
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "modules": [
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:             "iostat",
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:             "nfs",
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:             "restful"
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         ],
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "services": {}
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:     },
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:     "servicemap": {
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "epoch": 1,
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "modified": "2025-11-24T18:18:44.978620+0000",
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:         "services": {}
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:     },
Nov 24 18:18:52 compute-0 quirky_shockley[75328]:     "progress_events": {}
Nov 24 18:18:52 compute-0 quirky_shockley[75328]: }
Nov 24 18:18:52 compute-0 systemd[1]: libpod-5c3815131edaa6909ff90dc448770e47cefecc3f61b46ecadd9c47ea9aa0d5cb.scope: Deactivated successfully.
Nov 24 18:18:52 compute-0 podman[75311]: 2025-11-24 18:18:52.956428839 +0000 UTC m=+0.551745156 container died 5c3815131edaa6909ff90dc448770e47cefecc3f61b46ecadd9c47ea9aa0d5cb (image=quay.io/ceph/ceph:v18, name=quirky_shockley, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:18:52 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:18:52.957+0000 7f1ca3816140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 18:18:52 compute-0 ceph-mgr[75218]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 18:18:52 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'dashboard'
Nov 24 18:18:52 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3006523604' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 18:18:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff1d47cb8a0b5de75cdfd19be374299a8a4869afddca4a276aa41ca1d6d8cbad-merged.mount: Deactivated successfully.
Nov 24 18:18:53 compute-0 podman[75311]: 2025-11-24 18:18:53.004749977 +0000 UTC m=+0.600066284 container remove 5c3815131edaa6909ff90dc448770e47cefecc3f61b46ecadd9c47ea9aa0d5cb (image=quay.io/ceph/ceph:v18, name=quirky_shockley, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:18:53 compute-0 systemd[1]: libpod-conmon-5c3815131edaa6909ff90dc448770e47cefecc3f61b46ecadd9c47ea9aa0d5cb.scope: Deactivated successfully.
Nov 24 18:18:54 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'devicehealth'
Nov 24 18:18:54 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:18:54.669+0000 7f1ca3816140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 18:18:54 compute-0 ceph-mgr[75218]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 18:18:54 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'diskprediction_local'
Nov 24 18:18:55 compute-0 podman[75366]: 2025-11-24 18:18:55.082465559 +0000 UTC m=+0.039343067 container create ed37a08c2e12a5522d21f12bc395f926339ee6380c58104c087c64a4b6e4297c (image=quay.io/ceph/ceph:v18, name=friendly_yalow, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 18:18:55 compute-0 systemd[1]: Started libpod-conmon-ed37a08c2e12a5522d21f12bc395f926339ee6380c58104c087c64a4b6e4297c.scope.
Nov 24 18:18:55 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:18:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30b45e9beb5ce14d4310c6381014df7801ea16001851d821c39c140c0eac3d08/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30b45e9beb5ce14d4310c6381014df7801ea16001851d821c39c140c0eac3d08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30b45e9beb5ce14d4310c6381014df7801ea16001851d821c39c140c0eac3d08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:55 compute-0 podman[75366]: 2025-11-24 18:18:55.064574445 +0000 UTC m=+0.021451953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:18:55 compute-0 podman[75366]: 2025-11-24 18:18:55.168210936 +0000 UTC m=+0.125088464 container init ed37a08c2e12a5522d21f12bc395f926339ee6380c58104c087c64a4b6e4297c (image=quay.io/ceph/ceph:v18, name=friendly_yalow, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:18:55 compute-0 podman[75366]: 2025-11-24 18:18:55.175552668 +0000 UTC m=+0.132430176 container start ed37a08c2e12a5522d21f12bc395f926339ee6380c58104c087c64a4b6e4297c (image=quay.io/ceph/ceph:v18, name=friendly_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 18:18:55 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 24 18:18:55 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 24 18:18:55 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]:   from numpy import show_config as show_numpy_config
Nov 24 18:18:55 compute-0 podman[75366]: 2025-11-24 18:18:55.179117286 +0000 UTC m=+0.135994814 container attach ed37a08c2e12a5522d21f12bc395f926339ee6380c58104c087c64a4b6e4297c (image=quay.io/ceph/ceph:v18, name=friendly_yalow, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 18:18:55 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:18:55.187+0000 7f1ca3816140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 18:18:55 compute-0 ceph-mgr[75218]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 18:18:55 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'influx'
Nov 24 18:18:55 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:18:55.436+0000 7f1ca3816140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 18:18:55 compute-0 ceph-mgr[75218]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 18:18:55 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'insights'
Nov 24 18:18:55 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 24 18:18:55 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1034015369' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 18:18:55 compute-0 friendly_yalow[75382]: 
Nov 24 18:18:55 compute-0 friendly_yalow[75382]: {
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:     "fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:     "health": {
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "status": "HEALTH_OK",
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "checks": {},
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "mutes": []
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:     },
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:     "election_epoch": 5,
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:     "quorum": [
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         0
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:     ],
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:     "quorum_names": [
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "compute-0"
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:     ],
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:     "quorum_age": 8,
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:     "monmap": {
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "epoch": 1,
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "min_mon_release_name": "reef",
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "num_mons": 1
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:     },
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:     "osdmap": {
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "epoch": 1,
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "num_osds": 0,
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "num_up_osds": 0,
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "osd_up_since": 0,
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "num_in_osds": 0,
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "osd_in_since": 0,
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "num_remapped_pgs": 0
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:     },
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:     "pgmap": {
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "pgs_by_state": [],
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "num_pgs": 0,
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "num_pools": 0,
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "num_objects": 0,
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "data_bytes": 0,
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "bytes_used": 0,
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "bytes_avail": 0,
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "bytes_total": 0
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:     },
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:     "fsmap": {
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "epoch": 1,
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "by_rank": [],
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "up:standby": 0
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:     },
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:     "mgrmap": {
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "available": false,
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "num_standbys": 0,
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "modules": [
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:             "iostat",
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:             "nfs",
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:             "restful"
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         ],
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "services": {}
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:     },
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:     "servicemap": {
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "epoch": 1,
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "modified": "2025-11-24T18:18:44.978620+0000",
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:         "services": {}
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:     },
Nov 24 18:18:55 compute-0 friendly_yalow[75382]:     "progress_events": {}
Nov 24 18:18:55 compute-0 friendly_yalow[75382]: }
Nov 24 18:18:55 compute-0 systemd[1]: libpod-ed37a08c2e12a5522d21f12bc395f926339ee6380c58104c087c64a4b6e4297c.scope: Deactivated successfully.
Nov 24 18:18:55 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1034015369' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 18:18:55 compute-0 podman[75408]: 2025-11-24 18:18:55.665008008 +0000 UTC m=+0.026455627 container died ed37a08c2e12a5522d21f12bc395f926339ee6380c58104c087c64a4b6e4297c (image=quay.io/ceph/ceph:v18, name=friendly_yalow, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 18:18:55 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'iostat'
Nov 24 18:18:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-30b45e9beb5ce14d4310c6381014df7801ea16001851d821c39c140c0eac3d08-merged.mount: Deactivated successfully.
Nov 24 18:18:55 compute-0 podman[75408]: 2025-11-24 18:18:55.711024369 +0000 UTC m=+0.072471968 container remove ed37a08c2e12a5522d21f12bc395f926339ee6380c58104c087c64a4b6e4297c (image=quay.io/ceph/ceph:v18, name=friendly_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:18:55 compute-0 systemd[1]: libpod-conmon-ed37a08c2e12a5522d21f12bc395f926339ee6380c58104c087c64a4b6e4297c.scope: Deactivated successfully.
Nov 24 18:18:55 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:18:55.907+0000 7f1ca3816140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 18:18:55 compute-0 ceph-mgr[75218]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 18:18:55 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'k8sevents'
Nov 24 18:18:57 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'localpool'
Nov 24 18:18:57 compute-0 podman[75421]: 2025-11-24 18:18:57.829158123 +0000 UTC m=+0.055273751 container create 480acad95ab7e59f7f8eada69aa4598da5533bbe4d360ea506dd106ed9114bb0 (image=quay.io/ceph/ceph:v18, name=unruffled_mcnulty, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 24 18:18:57 compute-0 systemd[1]: Started libpod-conmon-480acad95ab7e59f7f8eada69aa4598da5533bbe4d360ea506dd106ed9114bb0.scope.
Nov 24 18:18:57 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:18:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657ee8b255f4ac1e7ec4396e02a2065ee59f367fc8b7243cd6e152a320d4b783/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657ee8b255f4ac1e7ec4396e02a2065ee59f367fc8b7243cd6e152a320d4b783/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657ee8b255f4ac1e7ec4396e02a2065ee59f367fc8b7243cd6e152a320d4b783/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:18:57 compute-0 podman[75421]: 2025-11-24 18:18:57.812198973 +0000 UTC m=+0.038314611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:18:57 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'mds_autoscaler'
Nov 24 18:18:57 compute-0 podman[75421]: 2025-11-24 18:18:57.920425127 +0000 UTC m=+0.146540785 container init 480acad95ab7e59f7f8eada69aa4598da5533bbe4d360ea506dd106ed9114bb0 (image=quay.io/ceph/ceph:v18, name=unruffled_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:18:57 compute-0 podman[75421]: 2025-11-24 18:18:57.927291587 +0000 UTC m=+0.153407195 container start 480acad95ab7e59f7f8eada69aa4598da5533bbe4d360ea506dd106ed9114bb0 (image=quay.io/ceph/ceph:v18, name=unruffled_mcnulty, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:18:57 compute-0 podman[75421]: 2025-11-24 18:18:57.930299462 +0000 UTC m=+0.156415140 container attach 480acad95ab7e59f7f8eada69aa4598da5533bbe4d360ea506dd106ed9114bb0 (image=quay.io/ceph/ceph:v18, name=unruffled_mcnulty, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 24 18:18:58 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 24 18:18:58 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2975182773' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]: 
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]: {
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:     "fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:     "health": {
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "status": "HEALTH_OK",
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "checks": {},
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "mutes": []
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:     },
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:     "election_epoch": 5,
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:     "quorum": [
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         0
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:     ],
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:     "quorum_names": [
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "compute-0"
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:     ],
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:     "quorum_age": 10,
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:     "monmap": {
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "epoch": 1,
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "min_mon_release_name": "reef",
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "num_mons": 1
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:     },
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:     "osdmap": {
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "epoch": 1,
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "num_osds": 0,
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "num_up_osds": 0,
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "osd_up_since": 0,
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "num_in_osds": 0,
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "osd_in_since": 0,
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "num_remapped_pgs": 0
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:     },
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:     "pgmap": {
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "pgs_by_state": [],
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "num_pgs": 0,
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "num_pools": 0,
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "num_objects": 0,
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "data_bytes": 0,
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "bytes_used": 0,
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "bytes_avail": 0,
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "bytes_total": 0
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:     },
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:     "fsmap": {
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "epoch": 1,
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "by_rank": [],
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "up:standby": 0
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:     },
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:     "mgrmap": {
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "available": false,
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "num_standbys": 0,
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "modules": [
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:             "iostat",
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:             "nfs",
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:             "restful"
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         ],
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "services": {}
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:     },
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:     "servicemap": {
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "epoch": 1,
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "modified": "2025-11-24T18:18:44.978620+0000",
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:         "services": {}
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:     },
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]:     "progress_events": {}
Nov 24 18:18:58 compute-0 unruffled_mcnulty[75437]: }
Nov 24 18:18:58 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'mirroring'
Nov 24 18:18:58 compute-0 systemd[1]: libpod-480acad95ab7e59f7f8eada69aa4598da5533bbe4d360ea506dd106ed9114bb0.scope: Deactivated successfully.
Nov 24 18:18:58 compute-0 podman[75421]: 2025-11-24 18:18:58.610016771 +0000 UTC m=+0.836132399 container died 480acad95ab7e59f7f8eada69aa4598da5533bbe4d360ea506dd106ed9114bb0 (image=quay.io/ceph/ceph:v18, name=unruffled_mcnulty, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 18:18:58 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2975182773' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 18:18:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-657ee8b255f4ac1e7ec4396e02a2065ee59f367fc8b7243cd6e152a320d4b783-merged.mount: Deactivated successfully.
Nov 24 18:18:58 compute-0 podman[75421]: 2025-11-24 18:18:58.651889469 +0000 UTC m=+0.878005087 container remove 480acad95ab7e59f7f8eada69aa4598da5533bbe4d360ea506dd106ed9114bb0 (image=quay.io/ceph/ceph:v18, name=unruffled_mcnulty, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:18:58 compute-0 systemd[1]: libpod-conmon-480acad95ab7e59f7f8eada69aa4598da5533bbe4d360ea506dd106ed9114bb0.scope: Deactivated successfully.
Nov 24 18:18:58 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'nfs'
Nov 24 18:18:59 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:18:59.524+0000 7f1ca3816140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 18:18:59 compute-0 ceph-mgr[75218]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 18:18:59 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'orchestrator'
Nov 24 18:19:00 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:00.185+0000 7f1ca3816140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 18:19:00 compute-0 ceph-mgr[75218]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 18:19:00 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'osd_perf_query'
Nov 24 18:19:00 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:00.449+0000 7f1ca3816140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 18:19:00 compute-0 ceph-mgr[75218]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 18:19:00 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'osd_support'
Nov 24 18:19:00 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:00.676+0000 7f1ca3816140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 18:19:00 compute-0 ceph-mgr[75218]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 18:19:00 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'pg_autoscaler'
Nov 24 18:19:00 compute-0 podman[75475]: 2025-11-24 18:19:00.728391592 +0000 UTC m=+0.046162856 container create 6b7820ef2f88a0d6590e0b889a334bc53726e9240a8ac455a853f3010af63442 (image=quay.io/ceph/ceph:v18, name=flamboyant_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Nov 24 18:19:00 compute-0 systemd[1]: Started libpod-conmon-6b7820ef2f88a0d6590e0b889a334bc53726e9240a8ac455a853f3010af63442.scope.
Nov 24 18:19:00 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:19:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/965679880d523cc9f4b24aaa61242bc24d5dc63c3bba3ab9394bf0cb3818998c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/965679880d523cc9f4b24aaa61242bc24d5dc63c3bba3ab9394bf0cb3818998c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/965679880d523cc9f4b24aaa61242bc24d5dc63c3bba3ab9394bf0cb3818998c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:00 compute-0 podman[75475]: 2025-11-24 18:19:00.712103828 +0000 UTC m=+0.029875102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:19:00 compute-0 podman[75475]: 2025-11-24 18:19:00.828211728 +0000 UTC m=+0.145982982 container init 6b7820ef2f88a0d6590e0b889a334bc53726e9240a8ac455a853f3010af63442 (image=quay.io/ceph/ceph:v18, name=flamboyant_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:19:00 compute-0 podman[75475]: 2025-11-24 18:19:00.838440071 +0000 UTC m=+0.156211365 container start 6b7820ef2f88a0d6590e0b889a334bc53726e9240a8ac455a853f3010af63442 (image=quay.io/ceph/ceph:v18, name=flamboyant_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Nov 24 18:19:00 compute-0 podman[75475]: 2025-11-24 18:19:00.844952213 +0000 UTC m=+0.162723487 container attach 6b7820ef2f88a0d6590e0b889a334bc53726e9240a8ac455a853f3010af63442 (image=quay.io/ceph/ceph:v18, name=flamboyant_cannon, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:19:00 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:00.962+0000 7f1ca3816140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 18:19:00 compute-0 ceph-mgr[75218]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 18:19:00 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'progress'
Nov 24 18:19:01 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 24 18:19:01 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4211118639' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]: 
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]: {
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:     "fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:     "health": {
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "status": "HEALTH_OK",
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "checks": {},
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "mutes": []
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:     },
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:     "election_epoch": 5,
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:     "quorum": [
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         0
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:     ],
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:     "quorum_names": [
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "compute-0"
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:     ],
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:     "quorum_age": 13,
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:     "monmap": {
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "epoch": 1,
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "min_mon_release_name": "reef",
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "num_mons": 1
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:     },
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:     "osdmap": {
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "epoch": 1,
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "num_osds": 0,
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "num_up_osds": 0,
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "osd_up_since": 0,
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "num_in_osds": 0,
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "osd_in_since": 0,
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "num_remapped_pgs": 0
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:     },
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:     "pgmap": {
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "pgs_by_state": [],
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "num_pgs": 0,
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "num_pools": 0,
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "num_objects": 0,
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "data_bytes": 0,
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "bytes_used": 0,
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "bytes_avail": 0,
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "bytes_total": 0
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:     },
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:     "fsmap": {
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "epoch": 1,
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "by_rank": [],
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "up:standby": 0
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:     },
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:     "mgrmap": {
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "available": false,
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "num_standbys": 0,
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "modules": [
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:             "iostat",
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:             "nfs",
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:             "restful"
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         ],
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "services": {}
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:     },
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:     "servicemap": {
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "epoch": 1,
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "modified": "2025-11-24T18:18:44.978620+0000",
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:         "services": {}
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:     },
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]:     "progress_events": {}
Nov 24 18:19:01 compute-0 flamboyant_cannon[75491]: }
Nov 24 18:19:01 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:01.214+0000 7f1ca3816140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 18:19:01 compute-0 ceph-mgr[75218]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 18:19:01 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'prometheus'
Nov 24 18:19:01 compute-0 systemd[1]: libpod-6b7820ef2f88a0d6590e0b889a334bc53726e9240a8ac455a853f3010af63442.scope: Deactivated successfully.
Nov 24 18:19:01 compute-0 podman[75475]: 2025-11-24 18:19:01.218932719 +0000 UTC m=+0.536703973 container died 6b7820ef2f88a0d6590e0b889a334bc53726e9240a8ac455a853f3010af63442 (image=quay.io/ceph/ceph:v18, name=flamboyant_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 24 18:19:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-965679880d523cc9f4b24aaa61242bc24d5dc63c3bba3ab9394bf0cb3818998c-merged.mount: Deactivated successfully.
Nov 24 18:19:01 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/4211118639' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 18:19:01 compute-0 podman[75475]: 2025-11-24 18:19:01.257795542 +0000 UTC m=+0.575566796 container remove 6b7820ef2f88a0d6590e0b889a334bc53726e9240a8ac455a853f3010af63442 (image=quay.io/ceph/ceph:v18, name=flamboyant_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:19:01 compute-0 systemd[1]: libpod-conmon-6b7820ef2f88a0d6590e0b889a334bc53726e9240a8ac455a853f3010af63442.scope: Deactivated successfully.
Nov 24 18:19:02 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:02.316+0000 7f1ca3816140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 18:19:02 compute-0 ceph-mgr[75218]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 18:19:02 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'rbd_support'
Nov 24 18:19:02 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:02.626+0000 7f1ca3816140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 18:19:02 compute-0 ceph-mgr[75218]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 18:19:02 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'restful'
Nov 24 18:19:03 compute-0 podman[75530]: 2025-11-24 18:19:03.347111852 +0000 UTC m=+0.053230011 container create 462319ce44727e444dd548ac648362779a8d47a6db192788afdde5a8b7ac0819 (image=quay.io/ceph/ceph:v18, name=nifty_lamport, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 18:19:03 compute-0 systemd[1]: Started libpod-conmon-462319ce44727e444dd548ac648362779a8d47a6db192788afdde5a8b7ac0819.scope.
Nov 24 18:19:03 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:19:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8584d9b835296ce7c947453ea976fd00f5d7d81a0c15117b4a55dd36f4404e0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8584d9b835296ce7c947453ea976fd00f5d7d81a0c15117b4a55dd36f4404e0b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8584d9b835296ce7c947453ea976fd00f5d7d81a0c15117b4a55dd36f4404e0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:03 compute-0 podman[75530]: 2025-11-24 18:19:03.323622359 +0000 UTC m=+0.029740508 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:19:03 compute-0 podman[75530]: 2025-11-24 18:19:03.421348713 +0000 UTC m=+0.127466842 container init 462319ce44727e444dd548ac648362779a8d47a6db192788afdde5a8b7ac0819 (image=quay.io/ceph/ceph:v18, name=nifty_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:19:03 compute-0 podman[75530]: 2025-11-24 18:19:03.42808059 +0000 UTC m=+0.134198709 container start 462319ce44727e444dd548ac648362779a8d47a6db192788afdde5a8b7ac0819 (image=quay.io/ceph/ceph:v18, name=nifty_lamport, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 18:19:03 compute-0 podman[75530]: 2025-11-24 18:19:03.431277459 +0000 UTC m=+0.137395578 container attach 462319ce44727e444dd548ac648362779a8d47a6db192788afdde5a8b7ac0819 (image=quay.io/ceph/ceph:v18, name=nifty_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 18:19:03 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'rgw'
Nov 24 18:19:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 24 18:19:03 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2243591140' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 18:19:03 compute-0 nifty_lamport[75546]: 
Nov 24 18:19:03 compute-0 nifty_lamport[75546]: {
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:     "fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:     "health": {
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "status": "HEALTH_OK",
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "checks": {},
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "mutes": []
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:     },
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:     "election_epoch": 5,
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:     "quorum": [
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         0
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:     ],
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:     "quorum_names": [
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "compute-0"
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:     ],
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:     "quorum_age": 16,
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:     "monmap": {
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "epoch": 1,
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "min_mon_release_name": "reef",
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "num_mons": 1
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:     },
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:     "osdmap": {
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "epoch": 1,
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "num_osds": 0,
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "num_up_osds": 0,
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "osd_up_since": 0,
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "num_in_osds": 0,
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "osd_in_since": 0,
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "num_remapped_pgs": 0
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:     },
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:     "pgmap": {
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "pgs_by_state": [],
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "num_pgs": 0,
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "num_pools": 0,
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "num_objects": 0,
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "data_bytes": 0,
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "bytes_used": 0,
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "bytes_avail": 0,
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "bytes_total": 0
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:     },
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:     "fsmap": {
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "epoch": 1,
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "by_rank": [],
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "up:standby": 0
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:     },
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:     "mgrmap": {
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "available": false,
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "num_standbys": 0,
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "modules": [
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:             "iostat",
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:             "nfs",
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:             "restful"
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         ],
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "services": {}
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:     },
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:     "servicemap": {
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "epoch": 1,
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "modified": "2025-11-24T18:18:44.978620+0000",
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:         "services": {}
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:     },
Nov 24 18:19:03 compute-0 nifty_lamport[75546]:     "progress_events": {}
Nov 24 18:19:03 compute-0 nifty_lamport[75546]: }
Nov 24 18:19:03 compute-0 systemd[1]: libpod-462319ce44727e444dd548ac648362779a8d47a6db192788afdde5a8b7ac0819.scope: Deactivated successfully.
Nov 24 18:19:03 compute-0 podman[75530]: 2025-11-24 18:19:03.819861467 +0000 UTC m=+0.525979586 container died 462319ce44727e444dd548ac648362779a8d47a6db192788afdde5a8b7ac0819 (image=quay.io/ceph/ceph:v18, name=nifty_lamport, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:19:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-8584d9b835296ce7c947453ea976fd00f5d7d81a0c15117b4a55dd36f4404e0b-merged.mount: Deactivated successfully.
Nov 24 18:19:03 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2243591140' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 18:19:03 compute-0 podman[75530]: 2025-11-24 18:19:03.863610172 +0000 UTC m=+0.569728321 container remove 462319ce44727e444dd548ac648362779a8d47a6db192788afdde5a8b7ac0819 (image=quay.io/ceph/ceph:v18, name=nifty_lamport, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:19:03 compute-0 systemd[1]: libpod-conmon-462319ce44727e444dd548ac648362779a8d47a6db192788afdde5a8b7ac0819.scope: Deactivated successfully.
Nov 24 18:19:04 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:04.239+0000 7f1ca3816140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 18:19:04 compute-0 ceph-mgr[75218]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 18:19:04 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'rook'
Nov 24 18:19:05 compute-0 podman[75586]: 2025-11-24 18:19:05.923283127 +0000 UTC m=+0.039548762 container create b8625a796ff8e7a96d64692400a427fde1baa745731ee9bae6ba6aea87fbd6a0 (image=quay.io/ceph/ceph:v18, name=interesting_torvalds, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 18:19:05 compute-0 systemd[1]: Started libpod-conmon-b8625a796ff8e7a96d64692400a427fde1baa745731ee9bae6ba6aea87fbd6a0.scope.
Nov 24 18:19:05 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54f6e2a144b277419f749c6747eb54ee57afc0c39e1f37190e5fcca30c686473/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54f6e2a144b277419f749c6747eb54ee57afc0c39e1f37190e5fcca30c686473/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54f6e2a144b277419f749c6747eb54ee57afc0c39e1f37190e5fcca30c686473/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:06 compute-0 podman[75586]: 2025-11-24 18:19:05.904771958 +0000 UTC m=+0.021037613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:19:06 compute-0 podman[75586]: 2025-11-24 18:19:06.014417927 +0000 UTC m=+0.130683662 container init b8625a796ff8e7a96d64692400a427fde1baa745731ee9bae6ba6aea87fbd6a0 (image=quay.io/ceph/ceph:v18, name=interesting_torvalds, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:19:06 compute-0 podman[75586]: 2025-11-24 18:19:06.019470112 +0000 UTC m=+0.135735757 container start b8625a796ff8e7a96d64692400a427fde1baa745731ee9bae6ba6aea87fbd6a0 (image=quay.io/ceph/ceph:v18, name=interesting_torvalds, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:19:06 compute-0 podman[75586]: 2025-11-24 18:19:06.023144394 +0000 UTC m=+0.139410069 container attach b8625a796ff8e7a96d64692400a427fde1baa745731ee9bae6ba6aea87fbd6a0 (image=quay.io/ceph/ceph:v18, name=interesting_torvalds, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 18:19:06 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:06.317+0000 7f1ca3816140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 18:19:06 compute-0 ceph-mgr[75218]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 18:19:06 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'selftest'
Nov 24 18:19:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 24 18:19:06 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/713541516' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]: 
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]: {
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:     "fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:     "health": {
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "status": "HEALTH_OK",
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "checks": {},
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "mutes": []
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:     },
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:     "election_epoch": 5,
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:     "quorum": [
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         0
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:     ],
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:     "quorum_names": [
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "compute-0"
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:     ],
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:     "quorum_age": 18,
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:     "monmap": {
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "epoch": 1,
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "min_mon_release_name": "reef",
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "num_mons": 1
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:     },
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:     "osdmap": {
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "epoch": 1,
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "num_osds": 0,
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "num_up_osds": 0,
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "osd_up_since": 0,
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "num_in_osds": 0,
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "osd_in_since": 0,
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "num_remapped_pgs": 0
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:     },
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:     "pgmap": {
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "pgs_by_state": [],
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "num_pgs": 0,
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "num_pools": 0,
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "num_objects": 0,
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "data_bytes": 0,
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "bytes_used": 0,
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "bytes_avail": 0,
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "bytes_total": 0
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:     },
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:     "fsmap": {
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "epoch": 1,
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "by_rank": [],
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "up:standby": 0
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:     },
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:     "mgrmap": {
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "available": false,
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "num_standbys": 0,
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "modules": [
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:             "iostat",
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:             "nfs",
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:             "restful"
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         ],
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "services": {}
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:     },
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:     "servicemap": {
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "epoch": 1,
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "modified": "2025-11-24T18:18:44.978620+0000",
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:         "services": {}
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:     },
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]:     "progress_events": {}
Nov 24 18:19:06 compute-0 interesting_torvalds[75603]: }
Nov 24 18:19:06 compute-0 systemd[1]: libpod-b8625a796ff8e7a96d64692400a427fde1baa745731ee9bae6ba6aea87fbd6a0.scope: Deactivated successfully.
Nov 24 18:19:06 compute-0 podman[75586]: 2025-11-24 18:19:06.438793423 +0000 UTC m=+0.555059088 container died b8625a796ff8e7a96d64692400a427fde1baa745731ee9bae6ba6aea87fbd6a0 (image=quay.io/ceph/ceph:v18, name=interesting_torvalds, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:19:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-54f6e2a144b277419f749c6747eb54ee57afc0c39e1f37190e5fcca30c686473-merged.mount: Deactivated successfully.
Nov 24 18:19:06 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/713541516' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 18:19:06 compute-0 podman[75586]: 2025-11-24 18:19:06.499267083 +0000 UTC m=+0.615532748 container remove b8625a796ff8e7a96d64692400a427fde1baa745731ee9bae6ba6aea87fbd6a0 (image=quay.io/ceph/ceph:v18, name=interesting_torvalds, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Nov 24 18:19:06 compute-0 systemd[1]: libpod-conmon-b8625a796ff8e7a96d64692400a427fde1baa745731ee9bae6ba6aea87fbd6a0.scope: Deactivated successfully.
Nov 24 18:19:06 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:06.576+0000 7f1ca3816140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 18:19:06 compute-0 ceph-mgr[75218]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 18:19:06 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'snap_schedule'
Nov 24 18:19:06 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:06.830+0000 7f1ca3816140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 18:19:06 compute-0 ceph-mgr[75218]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 18:19:06 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'stats'
Nov 24 18:19:07 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'status'
Nov 24 18:19:07 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:07.369+0000 7f1ca3816140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 18:19:07 compute-0 ceph-mgr[75218]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 18:19:07 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'telegraf'
Nov 24 18:19:07 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:07.618+0000 7f1ca3816140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 18:19:07 compute-0 ceph-mgr[75218]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 18:19:07 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'telemetry'
Nov 24 18:19:08 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:08.222+0000 7f1ca3816140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 18:19:08 compute-0 ceph-mgr[75218]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 18:19:08 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'test_orchestrator'
Nov 24 18:19:08 compute-0 podman[75641]: 2025-11-24 18:19:08.615173962 +0000 UTC m=+0.066489090 container create 90bb428af2e8435e2a7fb887b904a1f144d5709841a30deea9f639dbd6824065 (image=quay.io/ceph/ceph:v18, name=vigorous_cartwright, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 18:19:08 compute-0 systemd[1]: Started libpod-conmon-90bb428af2e8435e2a7fb887b904a1f144d5709841a30deea9f639dbd6824065.scope.
Nov 24 18:19:08 compute-0 podman[75641]: 2025-11-24 18:19:08.592581152 +0000 UTC m=+0.043896380 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:19:08 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23128c87f29dee7f0b7268971ac6dcb47ffb8fe66ae4da34308303e1d9dd57c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23128c87f29dee7f0b7268971ac6dcb47ffb8fe66ae4da34308303e1d9dd57c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23128c87f29dee7f0b7268971ac6dcb47ffb8fe66ae4da34308303e1d9dd57c8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:08 compute-0 podman[75641]: 2025-11-24 18:19:08.707634345 +0000 UTC m=+0.158949483 container init 90bb428af2e8435e2a7fb887b904a1f144d5709841a30deea9f639dbd6824065 (image=quay.io/ceph/ceph:v18, name=vigorous_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 24 18:19:08 compute-0 podman[75641]: 2025-11-24 18:19:08.72276427 +0000 UTC m=+0.174079408 container start 90bb428af2e8435e2a7fb887b904a1f144d5709841a30deea9f639dbd6824065 (image=quay.io/ceph/ceph:v18, name=vigorous_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 18:19:08 compute-0 podman[75641]: 2025-11-24 18:19:08.726587565 +0000 UTC m=+0.177902683 container attach 90bb428af2e8435e2a7fb887b904a1f144d5709841a30deea9f639dbd6824065 (image=quay.io/ceph/ceph:v18, name=vigorous_cartwright, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 18:19:08 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:08.934+0000 7f1ca3816140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 18:19:08 compute-0 ceph-mgr[75218]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 18:19:08 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'volumes'
Nov 24 18:19:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 24 18:19:09 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2857565935' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]: 
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]: {
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:     "fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:     "health": {
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "status": "HEALTH_OK",
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "checks": {},
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "mutes": []
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:     },
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:     "election_epoch": 5,
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:     "quorum": [
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         0
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:     ],
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:     "quorum_names": [
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "compute-0"
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:     ],
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:     "quorum_age": 21,
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:     "monmap": {
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "epoch": 1,
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "min_mon_release_name": "reef",
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "num_mons": 1
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:     },
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:     "osdmap": {
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "epoch": 1,
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "num_osds": 0,
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "num_up_osds": 0,
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "osd_up_since": 0,
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "num_in_osds": 0,
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "osd_in_since": 0,
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "num_remapped_pgs": 0
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:     },
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:     "pgmap": {
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "pgs_by_state": [],
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "num_pgs": 0,
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "num_pools": 0,
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "num_objects": 0,
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "data_bytes": 0,
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "bytes_used": 0,
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "bytes_avail": 0,
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "bytes_total": 0
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:     },
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:     "fsmap": {
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "epoch": 1,
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "by_rank": [],
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "up:standby": 0
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:     },
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:     "mgrmap": {
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "available": false,
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "num_standbys": 0,
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "modules": [
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:             "iostat",
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:             "nfs",
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:             "restful"
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         ],
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "services": {}
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:     },
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:     "servicemap": {
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "epoch": 1,
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "modified": "2025-11-24T18:18:44.978620+0000",
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:         "services": {}
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:     },
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]:     "progress_events": {}
Nov 24 18:19:09 compute-0 vigorous_cartwright[75658]: }
Nov 24 18:19:09 compute-0 systemd[1]: libpod-90bb428af2e8435e2a7fb887b904a1f144d5709841a30deea9f639dbd6824065.scope: Deactivated successfully.
Nov 24 18:19:09 compute-0 podman[75641]: 2025-11-24 18:19:09.094334056 +0000 UTC m=+0.545649174 container died 90bb428af2e8435e2a7fb887b904a1f144d5709841a30deea9f639dbd6824065 (image=quay.io/ceph/ceph:v18, name=vigorous_cartwright, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 18:19:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-23128c87f29dee7f0b7268971ac6dcb47ffb8fe66ae4da34308303e1d9dd57c8-merged.mount: Deactivated successfully.
Nov 24 18:19:09 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2857565935' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 18:19:09 compute-0 podman[75641]: 2025-11-24 18:19:09.133408646 +0000 UTC m=+0.584723764 container remove 90bb428af2e8435e2a7fb887b904a1f144d5709841a30deea9f639dbd6824065 (image=quay.io/ceph/ceph:v18, name=vigorous_cartwright, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:19:09 compute-0 systemd[1]: libpod-conmon-90bb428af2e8435e2a7fb887b904a1f144d5709841a30deea9f639dbd6824065.scope: Deactivated successfully.
Nov 24 18:19:09 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:09.656+0000 7f1ca3816140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'zabbix'
Nov 24 18:19:09 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:09.909+0000 7f1ca3816140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: ms_deliver_dispatch: unhandled message 0x5607d49131e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 24 18:19:09 compute-0 ceph-mon[74927]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.dfqptp
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: mgr handle_mgr_map Activating!
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: mgr handle_mgr_map I am now activating
Nov 24 18:19:09 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.dfqptp(active, starting, since 0.0133003s)
Nov 24 18:19:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 24 18:19:09 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 24 18:19:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).mds e1 all = 1
Nov 24 18:19:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 24 18:19:09 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 18:19:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 24 18:19:09 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 24 18:19:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 24 18:19:09 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 24 18:19:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).mds e1 all = 1
Nov 24 18:19:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 24 18:19:09 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 18:19:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 24 18:19:09 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 24 18:19:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 24 18:19:09 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 18:19:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.dfqptp", "id": "compute-0.dfqptp"} v 0) v1
Nov 24 18:19:09 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mgr metadata", "who": "compute-0.dfqptp", "id": "compute-0.dfqptp"}]: dispatch
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: mgr load Constructed class from module: balancer
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: mgr load Constructed class from module: crash
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [balancer INFO root] Starting
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: mgr load Constructed class from module: devicehealth
Nov 24 18:19:09 compute-0 ceph-mon[74927]: log_channel(cluster) log [INF] : Manager daemon compute-0.dfqptp is now available
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [devicehealth INFO root] Starting
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:19:09
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [balancer INFO root] No pools available
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: mgr load Constructed class from module: iostat
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: mgr load Constructed class from module: nfs
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: mgr load Constructed class from module: orchestrator
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: mgr load Constructed class from module: pg_autoscaler
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: mgr load Constructed class from module: progress
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [progress INFO root] Loading...
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [progress INFO root] No stored events to load
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [progress INFO root] Loaded [] historic events
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [progress INFO root] Loaded OSDMap, ready.
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [rbd_support INFO root] recovery thread starting
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [rbd_support INFO root] starting setup
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: mgr load Constructed class from module: rbd_support
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: mgr load Constructed class from module: restful
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [restful INFO root] server_addr: :: server_port: 8003
Nov 24 18:19:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dfqptp/mirror_snapshot_schedule"} v 0) v1
Nov 24 18:19:09 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dfqptp/mirror_snapshot_schedule"}]: dispatch
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [restful WARNING root] server not running: no certificate configured
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: mgr load Constructed class from module: status
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: mgr load Constructed class from module: telemetry
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 18:19:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [rbd_support INFO root] PerfHandler: starting
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TaskHandler: starting
Nov 24 18:19:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dfqptp/trash_purge_schedule"} v 0) v1
Nov 24 18:19:09 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dfqptp/trash_purge_schedule"}]: dispatch
Nov 24 18:19:09 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:19:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: [rbd_support INFO root] setup complete
Nov 24 18:19:09 compute-0 ceph-mgr[75218]: mgr load Constructed class from module: volumes
Nov 24 18:19:09 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Nov 24 18:19:09 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:10 compute-0 ceph-mon[74927]: Activating manager daemon compute-0.dfqptp
Nov 24 18:19:10 compute-0 ceph-mon[74927]: mgrmap e2: compute-0.dfqptp(active, starting, since 0.0133003s)
Nov 24 18:19:10 compute-0 ceph-mon[74927]: from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 24 18:19:10 compute-0 ceph-mon[74927]: from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 18:19:10 compute-0 ceph-mon[74927]: from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 24 18:19:10 compute-0 ceph-mon[74927]: from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 24 18:19:10 compute-0 ceph-mon[74927]: from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 18:19:10 compute-0 ceph-mon[74927]: from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 24 18:19:10 compute-0 ceph-mon[74927]: from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 18:19:10 compute-0 ceph-mon[74927]: from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mgr metadata", "who": "compute-0.dfqptp", "id": "compute-0.dfqptp"}]: dispatch
Nov 24 18:19:10 compute-0 ceph-mon[74927]: Manager daemon compute-0.dfqptp is now available
Nov 24 18:19:10 compute-0 ceph-mon[74927]: from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dfqptp/mirror_snapshot_schedule"}]: dispatch
Nov 24 18:19:10 compute-0 ceph-mon[74927]: from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dfqptp/trash_purge_schedule"}]: dispatch
Nov 24 18:19:10 compute-0 ceph-mon[74927]: from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:10 compute-0 ceph-mon[74927]: from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:10 compute-0 ceph-mon[74927]: from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:10 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.dfqptp(active, since 1.02358s)
Nov 24 18:19:11 compute-0 podman[75777]: 2025-11-24 18:19:11.203663863 +0000 UTC m=+0.048251087 container create 52b662b0657de1b3508652588350165f56662564c26bc41e27e1fc68515c8437 (image=quay.io/ceph/ceph:v18, name=dazzling_joliot, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 24 18:19:11 compute-0 systemd[1]: Started libpod-conmon-52b662b0657de1b3508652588350165f56662564c26bc41e27e1fc68515c8437.scope.
Nov 24 18:19:11 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:19:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40b2c0d8de39642fa263005e93be1650bf70b7fc43240c4a1e52772bdc0fb6dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40b2c0d8de39642fa263005e93be1650bf70b7fc43240c4a1e52772bdc0fb6dd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40b2c0d8de39642fa263005e93be1650bf70b7fc43240c4a1e52772bdc0fb6dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:11 compute-0 podman[75777]: 2025-11-24 18:19:11.178607242 +0000 UTC m=+0.023194576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:19:11 compute-0 podman[75777]: 2025-11-24 18:19:11.289669466 +0000 UTC m=+0.134256770 container init 52b662b0657de1b3508652588350165f56662564c26bc41e27e1fc68515c8437 (image=quay.io/ceph/ceph:v18, name=dazzling_joliot, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 18:19:11 compute-0 podman[75777]: 2025-11-24 18:19:11.295262095 +0000 UTC m=+0.139849329 container start 52b662b0657de1b3508652588350165f56662564c26bc41e27e1fc68515c8437 (image=quay.io/ceph/ceph:v18, name=dazzling_joliot, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 24 18:19:11 compute-0 podman[75777]: 2025-11-24 18:19:11.299638224 +0000 UTC m=+0.144225508 container attach 52b662b0657de1b3508652588350165f56662564c26bc41e27e1fc68515c8437 (image=quay.io/ceph/ceph:v18, name=dazzling_joliot, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:19:11 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 24 18:19:11 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3201577320' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]: 
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]: {
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:     "fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:     "health": {
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "status": "HEALTH_OK",
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "checks": {},
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "mutes": []
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:     },
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:     "election_epoch": 5,
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:     "quorum": [
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         0
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:     ],
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:     "quorum_names": [
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "compute-0"
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:     ],
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:     "quorum_age": 24,
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:     "monmap": {
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "epoch": 1,
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "min_mon_release_name": "reef",
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "num_mons": 1
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:     },
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:     "osdmap": {
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "epoch": 1,
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "num_osds": 0,
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "num_up_osds": 0,
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "osd_up_since": 0,
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "num_in_osds": 0,
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "osd_in_since": 0,
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "num_remapped_pgs": 0
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:     },
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:     "pgmap": {
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "pgs_by_state": [],
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "num_pgs": 0,
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "num_pools": 0,
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "num_objects": 0,
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "data_bytes": 0,
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "bytes_used": 0,
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "bytes_avail": 0,
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "bytes_total": 0
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:     },
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:     "fsmap": {
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "epoch": 1,
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "by_rank": [],
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "up:standby": 0
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:     },
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:     "mgrmap": {
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "available": true,
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "num_standbys": 0,
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "modules": [
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:             "iostat",
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:             "nfs",
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:             "restful"
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         ],
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "services": {}
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:     },
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:     "servicemap": {
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "epoch": 1,
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "modified": "2025-11-24T18:18:44.978620+0000",
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:         "services": {}
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:     },
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]:     "progress_events": {}
Nov 24 18:19:11 compute-0 dazzling_joliot[75794]: }
Nov 24 18:19:11 compute-0 systemd[1]: libpod-52b662b0657de1b3508652588350165f56662564c26bc41e27e1fc68515c8437.scope: Deactivated successfully.
Nov 24 18:19:11 compute-0 ceph-mgr[75218]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 18:19:11 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.dfqptp(active, since 2s)
Nov 24 18:19:11 compute-0 ceph-mon[74927]: mgrmap e3: compute-0.dfqptp(active, since 1.02358s)
Nov 24 18:19:11 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3201577320' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 18:19:11 compute-0 podman[75820]: 2025-11-24 18:19:11.959533811 +0000 UTC m=+0.032170959 container died 52b662b0657de1b3508652588350165f56662564c26bc41e27e1fc68515c8437 (image=quay.io/ceph/ceph:v18, name=dazzling_joliot, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:19:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-40b2c0d8de39642fa263005e93be1650bf70b7fc43240c4a1e52772bdc0fb6dd-merged.mount: Deactivated successfully.
Nov 24 18:19:12 compute-0 podman[75820]: 2025-11-24 18:19:12.000833305 +0000 UTC m=+0.073470363 container remove 52b662b0657de1b3508652588350165f56662564c26bc41e27e1fc68515c8437 (image=quay.io/ceph/ceph:v18, name=dazzling_joliot, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:19:12 compute-0 systemd[1]: libpod-conmon-52b662b0657de1b3508652588350165f56662564c26bc41e27e1fc68515c8437.scope: Deactivated successfully.
Nov 24 18:19:12 compute-0 podman[75836]: 2025-11-24 18:19:12.094884838 +0000 UTC m=+0.056188115 container create c171c0c312a9e9a2173e422759e17927710e1a24c3758b3a877e11a23cc4a8a5 (image=quay.io/ceph/ceph:v18, name=interesting_pare, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:19:12 compute-0 systemd[1]: Started libpod-conmon-c171c0c312a9e9a2173e422759e17927710e1a24c3758b3a877e11a23cc4a8a5.scope.
Nov 24 18:19:12 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:19:12 compute-0 podman[75836]: 2025-11-24 18:19:12.070619946 +0000 UTC m=+0.031923243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64363857c0abca10fb1fa43f8ab32cc62643c58aa5adc7234c327bd13f5fbc2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64363857c0abca10fb1fa43f8ab32cc62643c58aa5adc7234c327bd13f5fbc2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64363857c0abca10fb1fa43f8ab32cc62643c58aa5adc7234c327bd13f5fbc2b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64363857c0abca10fb1fa43f8ab32cc62643c58aa5adc7234c327bd13f5fbc2b/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:12 compute-0 podman[75836]: 2025-11-24 18:19:12.185141545 +0000 UTC m=+0.146444872 container init c171c0c312a9e9a2173e422759e17927710e1a24c3758b3a877e11a23cc4a8a5 (image=quay.io/ceph/ceph:v18, name=interesting_pare, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:19:12 compute-0 podman[75836]: 2025-11-24 18:19:12.192371225 +0000 UTC m=+0.153674482 container start c171c0c312a9e9a2173e422759e17927710e1a24c3758b3a877e11a23cc4a8a5 (image=quay.io/ceph/ceph:v18, name=interesting_pare, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 18:19:12 compute-0 podman[75836]: 2025-11-24 18:19:12.197119753 +0000 UTC m=+0.158423100 container attach c171c0c312a9e9a2173e422759e17927710e1a24c3758b3a877e11a23cc4a8a5 (image=quay.io/ceph/ceph:v18, name=interesting_pare, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 24 18:19:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 24 18:19:12 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4208384906' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 24 18:19:12 compute-0 systemd[1]: libpod-c171c0c312a9e9a2173e422759e17927710e1a24c3758b3a877e11a23cc4a8a5.scope: Deactivated successfully.
Nov 24 18:19:12 compute-0 podman[75836]: 2025-11-24 18:19:12.767842318 +0000 UTC m=+0.729145565 container died c171c0c312a9e9a2173e422759e17927710e1a24c3758b3a877e11a23cc4a8a5 (image=quay.io/ceph/ceph:v18, name=interesting_pare, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 18:19:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-64363857c0abca10fb1fa43f8ab32cc62643c58aa5adc7234c327bd13f5fbc2b-merged.mount: Deactivated successfully.
Nov 24 18:19:12 compute-0 podman[75836]: 2025-11-24 18:19:12.813329736 +0000 UTC m=+0.774632983 container remove c171c0c312a9e9a2173e422759e17927710e1a24c3758b3a877e11a23cc4a8a5 (image=quay.io/ceph/ceph:v18, name=interesting_pare, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:19:12 compute-0 systemd[1]: libpod-conmon-c171c0c312a9e9a2173e422759e17927710e1a24c3758b3a877e11a23cc4a8a5.scope: Deactivated successfully.
Nov 24 18:19:12 compute-0 podman[75892]: 2025-11-24 18:19:12.866268259 +0000 UTC m=+0.035546243 container create 2026305d2f8d212d814a37efa294ba71090eaa3cd73e3d08c5b57e362f152f6d (image=quay.io/ceph/ceph:v18, name=romantic_swirles, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 18:19:12 compute-0 systemd[1]: Started libpod-conmon-2026305d2f8d212d814a37efa294ba71090eaa3cd73e3d08c5b57e362f152f6d.scope.
Nov 24 18:19:12 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68b17fef3c193a71381bd813ab5cf7bcad2a6a1d0645b02622c816f0be19cc5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68b17fef3c193a71381bd813ab5cf7bcad2a6a1d0645b02622c816f0be19cc5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68b17fef3c193a71381bd813ab5cf7bcad2a6a1d0645b02622c816f0be19cc5f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:12 compute-0 podman[75892]: 2025-11-24 18:19:12.933465636 +0000 UTC m=+0.102743700 container init 2026305d2f8d212d814a37efa294ba71090eaa3cd73e3d08c5b57e362f152f6d (image=quay.io/ceph/ceph:v18, name=romantic_swirles, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 18:19:12 compute-0 podman[75892]: 2025-11-24 18:19:12.939812643 +0000 UTC m=+0.109090657 container start 2026305d2f8d212d814a37efa294ba71090eaa3cd73e3d08c5b57e362f152f6d (image=quay.io/ceph/ceph:v18, name=romantic_swirles, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 18:19:12 compute-0 podman[75892]: 2025-11-24 18:19:12.943483014 +0000 UTC m=+0.112761008 container attach 2026305d2f8d212d814a37efa294ba71090eaa3cd73e3d08c5b57e362f152f6d (image=quay.io/ceph/ceph:v18, name=romantic_swirles, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:19:12 compute-0 podman[75892]: 2025-11-24 18:19:12.850033666 +0000 UTC m=+0.019311670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:19:12 compute-0 ceph-mon[74927]: mgrmap e4: compute-0.dfqptp(active, since 2s)
Nov 24 18:19:12 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/4208384906' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 24 18:19:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Nov 24 18:19:13 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2356186078' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 24 18:19:13 compute-0 ceph-mgr[75218]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 18:19:13 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2356186078' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 24 18:19:13 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2356186078' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 24 18:19:13 compute-0 ceph-mgr[75218]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 24 18:19:13 compute-0 ceph-mgr[75218]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 24 18:19:13 compute-0 ceph-mgr[75218]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 24 18:19:13 compute-0 ceph-mgr[75218]: mgr respawn  1: '-n'
Nov 24 18:19:13 compute-0 ceph-mgr[75218]: mgr respawn  2: 'mgr.compute-0.dfqptp'
Nov 24 18:19:13 compute-0 ceph-mgr[75218]: mgr respawn  3: '-f'
Nov 24 18:19:13 compute-0 ceph-mgr[75218]: mgr respawn  4: '--setuser'
Nov 24 18:19:13 compute-0 ceph-mgr[75218]: mgr respawn  5: 'ceph'
Nov 24 18:19:13 compute-0 ceph-mgr[75218]: mgr respawn  6: '--setgroup'
Nov 24 18:19:13 compute-0 ceph-mgr[75218]: mgr respawn  7: 'ceph'
Nov 24 18:19:13 compute-0 ceph-mgr[75218]: mgr respawn  8: '--default-log-to-file=false'
Nov 24 18:19:13 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.dfqptp(active, since 4s)
Nov 24 18:19:13 compute-0 systemd[1]: libpod-2026305d2f8d212d814a37efa294ba71090eaa3cd73e3d08c5b57e362f152f6d.scope: Deactivated successfully.
Nov 24 18:19:13 compute-0 podman[75892]: 2025-11-24 18:19:13.991327324 +0000 UTC m=+1.160605308 container died 2026305d2f8d212d814a37efa294ba71090eaa3cd73e3d08c5b57e362f152f6d (image=quay.io/ceph/ceph:v18, name=romantic_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 18:19:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-68b17fef3c193a71381bd813ab5cf7bcad2a6a1d0645b02622c816f0be19cc5f-merged.mount: Deactivated successfully.
Nov 24 18:19:14 compute-0 podman[75892]: 2025-11-24 18:19:14.034276739 +0000 UTC m=+1.203554713 container remove 2026305d2f8d212d814a37efa294ba71090eaa3cd73e3d08c5b57e362f152f6d (image=quay.io/ceph/ceph:v18, name=romantic_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:19:14 compute-0 systemd[1]: libpod-conmon-2026305d2f8d212d814a37efa294ba71090eaa3cd73e3d08c5b57e362f152f6d.scope: Deactivated successfully.
Nov 24 18:19:14 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: ignoring --setuser ceph since I am not root
Nov 24 18:19:14 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: ignoring --setgroup ceph since I am not root
Nov 24 18:19:14 compute-0 ceph-mgr[75218]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 24 18:19:14 compute-0 ceph-mgr[75218]: pidfile_write: ignore empty --pid-file
Nov 24 18:19:14 compute-0 podman[75946]: 2025-11-24 18:19:14.120666581 +0000 UTC m=+0.060237255 container create c4c8ffc765f8ef1bc8cd4d7fbc6134179c6e715ab40c5a57b681d7cf171e2802 (image=quay.io/ceph/ceph:v18, name=flamboyant_mirzakhani, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 24 18:19:14 compute-0 systemd[1]: Started libpod-conmon-c4c8ffc765f8ef1bc8cd4d7fbc6134179c6e715ab40c5a57b681d7cf171e2802.scope.
Nov 24 18:19:14 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'alerts'
Nov 24 18:19:14 compute-0 podman[75946]: 2025-11-24 18:19:14.100232124 +0000 UTC m=+0.039802778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:19:14 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20cb8028bb67275ee77bfeb39bb003303ae97c5f326f0bf2246a3c9d44a775f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20cb8028bb67275ee77bfeb39bb003303ae97c5f326f0bf2246a3c9d44a775f6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20cb8028bb67275ee77bfeb39bb003303ae97c5f326f0bf2246a3c9d44a775f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:14 compute-0 podman[75946]: 2025-11-24 18:19:14.356991953 +0000 UTC m=+0.296562667 container init c4c8ffc765f8ef1bc8cd4d7fbc6134179c6e715ab40c5a57b681d7cf171e2802 (image=quay.io/ceph/ceph:v18, name=flamboyant_mirzakhani, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:19:14 compute-0 podman[75946]: 2025-11-24 18:19:14.36210083 +0000 UTC m=+0.301671494 container start c4c8ffc765f8ef1bc8cd4d7fbc6134179c6e715ab40c5a57b681d7cf171e2802 (image=quay.io/ceph/ceph:v18, name=flamboyant_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 18:19:14 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:14.492+0000 7f63da190140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 18:19:14 compute-0 ceph-mgr[75218]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 18:19:14 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'balancer'
Nov 24 18:19:14 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:14.748+0000 7f63da190140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 18:19:14 compute-0 ceph-mgr[75218]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 18:19:14 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'cephadm'
Nov 24 18:19:14 compute-0 podman[75946]: 2025-11-24 18:19:14.796072863 +0000 UTC m=+0.735643497 container attach c4c8ffc765f8ef1bc8cd4d7fbc6134179c6e715ab40c5a57b681d7cf171e2802 (image=quay.io/ceph/ceph:v18, name=flamboyant_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:19:15 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 24 18:19:15 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2214343238' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 18:19:15 compute-0 flamboyant_mirzakhani[75987]: {
Nov 24 18:19:15 compute-0 flamboyant_mirzakhani[75987]:     "epoch": 5,
Nov 24 18:19:15 compute-0 flamboyant_mirzakhani[75987]:     "available": true,
Nov 24 18:19:15 compute-0 flamboyant_mirzakhani[75987]:     "active_name": "compute-0.dfqptp",
Nov 24 18:19:15 compute-0 flamboyant_mirzakhani[75987]:     "num_standby": 0
Nov 24 18:19:15 compute-0 flamboyant_mirzakhani[75987]: }
Nov 24 18:19:15 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2356186078' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 24 18:19:15 compute-0 ceph-mon[74927]: mgrmap e5: compute-0.dfqptp(active, since 4s)
Nov 24 18:19:15 compute-0 systemd[1]: libpod-c4c8ffc765f8ef1bc8cd4d7fbc6134179c6e715ab40c5a57b681d7cf171e2802.scope: Deactivated successfully.
Nov 24 18:19:15 compute-0 conmon[75987]: conmon c4c8ffc765f8ef1bc8cd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c4c8ffc765f8ef1bc8cd4d7fbc6134179c6e715ab40c5a57b681d7cf171e2802.scope/container/memory.events
Nov 24 18:19:15 compute-0 podman[75946]: 2025-11-24 18:19:15.073560536 +0000 UTC m=+1.013131200 container died c4c8ffc765f8ef1bc8cd4d7fbc6134179c6e715ab40c5a57b681d7cf171e2802 (image=quay.io/ceph/ceph:v18, name=flamboyant_mirzakhani, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:19:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-20cb8028bb67275ee77bfeb39bb003303ae97c5f326f0bf2246a3c9d44a775f6-merged.mount: Deactivated successfully.
Nov 24 18:19:15 compute-0 podman[75946]: 2025-11-24 18:19:15.638439516 +0000 UTC m=+1.578010190 container remove c4c8ffc765f8ef1bc8cd4d7fbc6134179c6e715ab40c5a57b681d7cf171e2802 (image=quay.io/ceph/ceph:v18, name=flamboyant_mirzakhani, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:19:15 compute-0 systemd[1]: libpod-conmon-c4c8ffc765f8ef1bc8cd4d7fbc6134179c6e715ab40c5a57b681d7cf171e2802.scope: Deactivated successfully.
Nov 24 18:19:15 compute-0 podman[76027]: 2025-11-24 18:19:15.728533 +0000 UTC m=+0.056572844 container create faa2fc1178624312db21ad328e02f4dd04a21a9a60372dde37537f05b8e68348 (image=quay.io/ceph/ceph:v18, name=kind_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 18:19:15 compute-0 systemd[1]: Started libpod-conmon-faa2fc1178624312db21ad328e02f4dd04a21a9a60372dde37537f05b8e68348.scope.
Nov 24 18:19:15 compute-0 podman[76027]: 2025-11-24 18:19:15.707913428 +0000 UTC m=+0.035953252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:19:15 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:19:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e10c2fad4c5182967e646963c4508b811071d37d9f08d88ef29d5ecec3f51427/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e10c2fad4c5182967e646963c4508b811071d37d9f08d88ef29d5ecec3f51427/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e10c2fad4c5182967e646963c4508b811071d37d9f08d88ef29d5ecec3f51427/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:15 compute-0 podman[76027]: 2025-11-24 18:19:15.853804637 +0000 UTC m=+0.181844501 container init faa2fc1178624312db21ad328e02f4dd04a21a9a60372dde37537f05b8e68348 (image=quay.io/ceph/ceph:v18, name=kind_haibt, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:19:15 compute-0 podman[76027]: 2025-11-24 18:19:15.863565449 +0000 UTC m=+0.191605283 container start faa2fc1178624312db21ad328e02f4dd04a21a9a60372dde37537f05b8e68348 (image=quay.io/ceph/ceph:v18, name=kind_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 18:19:15 compute-0 podman[76027]: 2025-11-24 18:19:15.868712576 +0000 UTC m=+0.196752410 container attach faa2fc1178624312db21ad328e02f4dd04a21a9a60372dde37537f05b8e68348 (image=quay.io/ceph/ceph:v18, name=kind_haibt, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 18:19:16 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2214343238' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 18:19:16 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'crash'
Nov 24 18:19:17 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:17.081+0000 7f63da190140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 18:19:17 compute-0 ceph-mgr[75218]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 18:19:17 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'dashboard'
Nov 24 18:19:18 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'devicehealth'
Nov 24 18:19:18 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:18.849+0000 7f63da190140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 18:19:18 compute-0 ceph-mgr[75218]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 18:19:18 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'diskprediction_local'
Nov 24 18:19:19 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 24 18:19:19 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 24 18:19:19 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]:   from numpy import show_config as show_numpy_config
Nov 24 18:19:19 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:19.375+0000 7f63da190140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 18:19:19 compute-0 ceph-mgr[75218]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 18:19:19 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'influx'
Nov 24 18:19:19 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:19.607+0000 7f63da190140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 18:19:19 compute-0 ceph-mgr[75218]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 18:19:19 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'insights'
Nov 24 18:19:19 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'iostat'
Nov 24 18:19:20 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:20.088+0000 7f63da190140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 18:19:20 compute-0 ceph-mgr[75218]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 18:19:20 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'k8sevents'
Nov 24 18:19:21 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'localpool'
Nov 24 18:19:22 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'mds_autoscaler'
Nov 24 18:19:22 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'mirroring'
Nov 24 18:19:23 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'nfs'
Nov 24 18:19:23 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:23.776+0000 7f63da190140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 18:19:23 compute-0 ceph-mgr[75218]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 18:19:23 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'orchestrator'
Nov 24 18:19:24 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:24.485+0000 7f63da190140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 18:19:24 compute-0 ceph-mgr[75218]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 18:19:24 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'osd_perf_query'
Nov 24 18:19:24 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:24.767+0000 7f63da190140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 18:19:24 compute-0 ceph-mgr[75218]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 18:19:24 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'osd_support'
Nov 24 18:19:24 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:24.998+0000 7f63da190140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 18:19:24 compute-0 ceph-mgr[75218]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 18:19:24 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'pg_autoscaler'
Nov 24 18:19:25 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:25.263+0000 7f63da190140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 18:19:25 compute-0 ceph-mgr[75218]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 18:19:25 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'progress'
Nov 24 18:19:25 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:25.530+0000 7f63da190140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 18:19:25 compute-0 ceph-mgr[75218]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 18:19:25 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'prometheus'
Nov 24 18:19:26 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:26.560+0000 7f63da190140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 18:19:26 compute-0 ceph-mgr[75218]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 18:19:26 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'rbd_support'
Nov 24 18:19:26 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:26.865+0000 7f63da190140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 18:19:26 compute-0 ceph-mgr[75218]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 18:19:26 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'restful'
Nov 24 18:19:27 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'rgw'
Nov 24 18:19:28 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:28.247+0000 7f63da190140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 18:19:28 compute-0 ceph-mgr[75218]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 18:19:28 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'rook'
Nov 24 18:19:30 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:30.340+0000 7f63da190140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 18:19:30 compute-0 ceph-mgr[75218]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 18:19:30 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'selftest'
Nov 24 18:19:30 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:30.625+0000 7f63da190140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 18:19:30 compute-0 ceph-mgr[75218]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 18:19:30 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'snap_schedule'
Nov 24 18:19:30 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:30.907+0000 7f63da190140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 18:19:30 compute-0 ceph-mgr[75218]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 18:19:30 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'stats'
Nov 24 18:19:31 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'status'
Nov 24 18:19:31 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:31.450+0000 7f63da190140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 18:19:31 compute-0 ceph-mgr[75218]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 18:19:31 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'telegraf'
Nov 24 18:19:31 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:31.714+0000 7f63da190140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 18:19:31 compute-0 ceph-mgr[75218]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 18:19:31 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'telemetry'
Nov 24 18:19:32 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:32.363+0000 7f63da190140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 18:19:32 compute-0 ceph-mgr[75218]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 18:19:32 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'test_orchestrator'
Nov 24 18:19:33 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:33.044+0000 7f63da190140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 18:19:33 compute-0 ceph-mgr[75218]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 18:19:33 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'volumes'
Nov 24 18:19:33 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:33.790+0000 7f63da190140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 18:19:33 compute-0 ceph-mgr[75218]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 18:19:33 compute-0 ceph-mgr[75218]: mgr[py] Loading python module 'zabbix'
Nov 24 18:19:34 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:34.064+0000 7f63da190140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 18:19:34 compute-0 ceph-mon[74927]: log_channel(cluster) log [INF] : Active manager daemon compute-0.dfqptp restarted
Nov 24 18:19:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Nov 24 18:19:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 18:19:34 compute-0 ceph-mon[74927]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.dfqptp
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: ms_deliver_dispatch: unhandled message 0x563ff52331e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 24 18:19:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 24 18:19:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 24 18:19:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: mgr handle_mgr_map Activating!
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: mgr handle_mgr_map I am now activating
Nov 24 18:19:34 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Nov 24 18:19:34 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.dfqptp(active, starting, since 0.349364s)
Nov 24 18:19:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 24 18:19:34 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 18:19:34 compute-0 ceph-mon[74927]: Active manager daemon compute-0.dfqptp restarted
Nov 24 18:19:34 compute-0 ceph-mon[74927]: Activating manager daemon compute-0.dfqptp
Nov 24 18:19:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.dfqptp", "id": "compute-0.dfqptp"} v 0) v1
Nov 24 18:19:34 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mgr metadata", "who": "compute-0.dfqptp", "id": "compute-0.dfqptp"}]: dispatch
Nov 24 18:19:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 24 18:19:34 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 24 18:19:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).mds e1 all = 1
Nov 24 18:19:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 24 18:19:34 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 18:19:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 24 18:19:34 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: mgr load Constructed class from module: balancer
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Starting
Nov 24 18:19:34 compute-0 ceph-mon[74927]: log_channel(cluster) log [INF] : Manager daemon compute-0.dfqptp is now available
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:19:34
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [balancer INFO root] No pools available
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Nov 24 18:19:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Nov 24 18:19:34 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Nov 24 18:19:34 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: mgr load Constructed class from module: cephadm
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: mgr load Constructed class from module: crash
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: mgr load Constructed class from module: devicehealth
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: mgr load Constructed class from module: iostat
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [devicehealth INFO root] Starting
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: mgr load Constructed class from module: nfs
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: mgr load Constructed class from module: orchestrator
Nov 24 18:19:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 24 18:19:34 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: mgr load Constructed class from module: pg_autoscaler
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: mgr load Constructed class from module: progress
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [progress INFO root] Loading...
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [progress INFO root] No stored events to load
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [progress INFO root] Loaded [] historic events
Nov 24 18:19:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 24 18:19:34 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [progress INFO root] Loaded OSDMap, ready.
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] recovery thread starting
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] starting setup
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: mgr load Constructed class from module: rbd_support
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: mgr load Constructed class from module: restful
Nov 24 18:19:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dfqptp/mirror_snapshot_schedule"} v 0) v1
Nov 24 18:19:34 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dfqptp/mirror_snapshot_schedule"}]: dispatch
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: mgr load Constructed class from module: status
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [restful INFO root] server_addr: :: server_port: 8003
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: mgr load Constructed class from module: telemetry
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] PerfHandler: starting
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TaskHandler: starting
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [restful WARNING root] server not running: no certificate configured
Nov 24 18:19:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dfqptp/trash_purge_schedule"} v 0) v1
Nov 24 18:19:34 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dfqptp/trash_purge_schedule"}]: dispatch
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] setup complete
Nov 24 18:19:34 compute-0 ceph-mgr[75218]: mgr load Constructed class from module: volumes
Nov 24 18:19:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Nov 24 18:19:35 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Nov 24 18:19:35 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:35 compute-0 ceph-mon[74927]: osdmap e2: 0 total, 0 up, 0 in
Nov 24 18:19:35 compute-0 ceph-mon[74927]: mgrmap e6: compute-0.dfqptp(active, starting, since 0.349364s)
Nov 24 18:19:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 18:19:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mgr metadata", "who": "compute-0.dfqptp", "id": "compute-0.dfqptp"}]: dispatch
Nov 24 18:19:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 24 18:19:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 18:19:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 24 18:19:35 compute-0 ceph-mon[74927]: Manager daemon compute-0.dfqptp is now available
Nov 24 18:19:35 compute-0 ceph-mon[74927]: Found migration_current of "None". Setting to last migration.
Nov 24 18:19:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 18:19:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 18:19:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dfqptp/mirror_snapshot_schedule"}]: dispatch
Nov 24 18:19:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dfqptp/trash_purge_schedule"}]: dispatch
Nov 24 18:19:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:35 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 24 18:19:35 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.dfqptp(active, since 1.42919s)
Nov 24 18:19:35 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 24 18:19:35 compute-0 kind_haibt[76043]: {
Nov 24 18:19:35 compute-0 kind_haibt[76043]:     "mgrmap_epoch": 7,
Nov 24 18:19:35 compute-0 kind_haibt[76043]:     "initialized": true
Nov 24 18:19:35 compute-0 kind_haibt[76043]: }
Nov 24 18:19:35 compute-0 systemd[1]: libpod-faa2fc1178624312db21ad328e02f4dd04a21a9a60372dde37537f05b8e68348.scope: Deactivated successfully.
Nov 24 18:19:35 compute-0 podman[76027]: 2025-11-24 18:19:35.520225687 +0000 UTC m=+19.848265491 container died faa2fc1178624312db21ad328e02f4dd04a21a9a60372dde37537f05b8e68348 (image=quay.io/ceph/ceph:v18, name=kind_haibt, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 18:19:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-e10c2fad4c5182967e646963c4508b811071d37d9f08d88ef29d5ecec3f51427-merged.mount: Deactivated successfully.
Nov 24 18:19:35 compute-0 podman[76027]: 2025-11-24 18:19:35.629536007 +0000 UTC m=+19.957575821 container remove faa2fc1178624312db21ad328e02f4dd04a21a9a60372dde37537f05b8e68348 (image=quay.io/ceph/ceph:v18, name=kind_haibt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:19:35 compute-0 systemd[1]: libpod-conmon-faa2fc1178624312db21ad328e02f4dd04a21a9a60372dde37537f05b8e68348.scope: Deactivated successfully.
Nov 24 18:19:35 compute-0 podman[76202]: 2025-11-24 18:19:35.738848938 +0000 UTC m=+0.071133410 container create 2ba0f97702153dd7e3ffa694f04f0f3ff3dc1c6eee5650399ffae31e2f1ecd4c (image=quay.io/ceph/ceph:v18, name=intelligent_swartz, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:19:35 compute-0 systemd[1]: Started libpod-conmon-2ba0f97702153dd7e3ffa694f04f0f3ff3dc1c6eee5650399ffae31e2f1ecd4c.scope.
Nov 24 18:19:35 compute-0 podman[76202]: 2025-11-24 18:19:35.711371051 +0000 UTC m=+0.043655603 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:19:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:19:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6812407cdc80a2951de00efcafae98d3f2ef7db825d23d36b0ca1f50439027e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6812407cdc80a2951de00efcafae98d3f2ef7db825d23d36b0ca1f50439027e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6812407cdc80a2951de00efcafae98d3f2ef7db825d23d36b0ca1f50439027e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:35 compute-0 podman[76202]: 2025-11-24 18:19:35.858152025 +0000 UTC m=+0.190436537 container init 2ba0f97702153dd7e3ffa694f04f0f3ff3dc1c6eee5650399ffae31e2f1ecd4c (image=quay.io/ceph/ceph:v18, name=intelligent_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 24 18:19:35 compute-0 podman[76202]: 2025-11-24 18:19:35.865022181 +0000 UTC m=+0.197306653 container start 2ba0f97702153dd7e3ffa694f04f0f3ff3dc1c6eee5650399ffae31e2f1ecd4c (image=quay.io/ceph/ceph:v18, name=intelligent_swartz, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:19:35 compute-0 podman[76202]: 2025-11-24 18:19:35.874736091 +0000 UTC m=+0.207020603 container attach 2ba0f97702153dd7e3ffa694f04f0f3ff3dc1c6eee5650399ffae31e2f1ecd4c (image=quay.io/ceph/ceph:v18, name=intelligent_swartz, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:19:36 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:19:36 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Nov 24 18:19:36 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:36 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 24 18:19:36 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 18:19:36 compute-0 ceph-mgr[75218]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 18:19:36 compute-0 systemd[1]: libpod-2ba0f97702153dd7e3ffa694f04f0f3ff3dc1c6eee5650399ffae31e2f1ecd4c.scope: Deactivated successfully.
Nov 24 18:19:36 compute-0 podman[76202]: 2025-11-24 18:19:36.462350568 +0000 UTC m=+0.794635030 container died 2ba0f97702153dd7e3ffa694f04f0f3ff3dc1c6eee5650399ffae31e2f1ecd4c (image=quay.io/ceph/ceph:v18, name=intelligent_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:19:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6812407cdc80a2951de00efcafae98d3f2ef7db825d23d36b0ca1f50439027e-merged.mount: Deactivated successfully.
Nov 24 18:19:36 compute-0 ceph-mon[74927]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 24 18:19:36 compute-0 ceph-mon[74927]: mgrmap e7: compute-0.dfqptp(active, since 1.42919s)
Nov 24 18:19:36 compute-0 ceph-mon[74927]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 24 18:19:36 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:36 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 18:19:36 compute-0 podman[76202]: 2025-11-24 18:19:36.502402108 +0000 UTC m=+0.834686570 container remove 2ba0f97702153dd7e3ffa694f04f0f3ff3dc1c6eee5650399ffae31e2f1ecd4c (image=quay.io/ceph/ceph:v18, name=intelligent_swartz, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:19:36 compute-0 systemd[1]: libpod-conmon-2ba0f97702153dd7e3ffa694f04f0f3ff3dc1c6eee5650399ffae31e2f1ecd4c.scope: Deactivated successfully.
Nov 24 18:19:36 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.dfqptp(active, since 2s)
Nov 24 18:19:36 compute-0 ceph-mgr[75218]: [cephadm INFO cherrypy.error] [24/Nov/2025:18:19:36] ENGINE Bus STARTING
Nov 24 18:19:36 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : [24/Nov/2025:18:19:36] ENGINE Bus STARTING
Nov 24 18:19:36 compute-0 podman[76255]: 2025-11-24 18:19:36.628249203 +0000 UTC m=+0.104453166 container create f399ace9ec70427cc31f846cf309ff923c3c7885f6392c81d686b4f1ae6ced2d (image=quay.io/ceph/ceph:v18, name=nostalgic_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Nov 24 18:19:36 compute-0 podman[76255]: 2025-11-24 18:19:36.548658507 +0000 UTC m=+0.024862480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:19:36 compute-0 ceph-mgr[75218]: [cephadm INFO cherrypy.error] [24/Nov/2025:18:19:36] ENGINE Serving on http://192.168.122.100:8765
Nov 24 18:19:36 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : [24/Nov/2025:18:19:36] ENGINE Serving on http://192.168.122.100:8765
Nov 24 18:19:36 compute-0 systemd[1]: Started libpod-conmon-f399ace9ec70427cc31f846cf309ff923c3c7885f6392c81d686b4f1ae6ced2d.scope.
Nov 24 18:19:36 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:19:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4388d472766a5fb4b5fccb512a0127dd8f20f7025e523751d03fb262ffdaad85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4388d472766a5fb4b5fccb512a0127dd8f20f7025e523751d03fb262ffdaad85/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4388d472766a5fb4b5fccb512a0127dd8f20f7025e523751d03fb262ffdaad85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:36 compute-0 ceph-mgr[75218]: [cephadm INFO cherrypy.error] [24/Nov/2025:18:19:36] ENGINE Serving on https://192.168.122.100:7150
Nov 24 18:19:36 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : [24/Nov/2025:18:19:36] ENGINE Serving on https://192.168.122.100:7150
Nov 24 18:19:36 compute-0 ceph-mgr[75218]: [cephadm INFO cherrypy.error] [24/Nov/2025:18:19:36] ENGINE Bus STARTED
Nov 24 18:19:36 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : [24/Nov/2025:18:19:36] ENGINE Bus STARTED
Nov 24 18:19:36 compute-0 ceph-mgr[75218]: [cephadm INFO cherrypy.error] [24/Nov/2025:18:19:36] ENGINE Client ('192.168.122.100', 47586) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 24 18:19:36 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 24 18:19:36 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : [24/Nov/2025:18:19:36] ENGINE Client ('192.168.122.100', 47586) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 24 18:19:36 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 18:19:36 compute-0 podman[76255]: 2025-11-24 18:19:36.823865152 +0000 UTC m=+0.300069115 container init f399ace9ec70427cc31f846cf309ff923c3c7885f6392c81d686b4f1ae6ced2d (image=quay.io/ceph/ceph:v18, name=nostalgic_goldstine, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:19:36 compute-0 podman[76255]: 2025-11-24 18:19:36.828160843 +0000 UTC m=+0.304364796 container start f399ace9ec70427cc31f846cf309ff923c3c7885f6392c81d686b4f1ae6ced2d (image=quay.io/ceph/ceph:v18, name=nostalgic_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 18:19:36 compute-0 podman[76255]: 2025-11-24 18:19:36.915446197 +0000 UTC m=+0.391650150 container attach f399ace9ec70427cc31f846cf309ff923c3c7885f6392c81d686b4f1ae6ced2d (image=quay.io/ceph/ceph:v18, name=nostalgic_goldstine, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 18:19:37 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:19:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Nov 24 18:19:37 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:37 compute-0 ceph-mgr[75218]: [cephadm INFO root] Set ssh ssh_user
Nov 24 18:19:37 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Nov 24 18:19:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Nov 24 18:19:37 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:37 compute-0 ceph-mgr[75218]: [cephadm INFO root] Set ssh ssh_config
Nov 24 18:19:37 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Nov 24 18:19:37 compute-0 ceph-mgr[75218]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Nov 24 18:19:37 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Nov 24 18:19:37 compute-0 nostalgic_goldstine[76294]: ssh user set to ceph-admin. sudo will be used
Nov 24 18:19:37 compute-0 systemd[1]: libpod-f399ace9ec70427cc31f846cf309ff923c3c7885f6392c81d686b4f1ae6ced2d.scope: Deactivated successfully.
Nov 24 18:19:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019923644 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:19:37 compute-0 podman[76322]: 2025-11-24 18:19:37.468078813 +0000 UTC m=+0.026633186 container died f399ace9ec70427cc31f846cf309ff923c3c7885f6392c81d686b4f1ae6ced2d (image=quay.io/ceph/ceph:v18, name=nostalgic_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Nov 24 18:19:37 compute-0 ceph-mon[74927]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:19:37 compute-0 ceph-mon[74927]: mgrmap e8: compute-0.dfqptp(active, since 2s)
Nov 24 18:19:37 compute-0 ceph-mon[74927]: [24/Nov/2025:18:19:36] ENGINE Bus STARTING
Nov 24 18:19:37 compute-0 ceph-mon[74927]: [24/Nov/2025:18:19:36] ENGINE Serving on http://192.168.122.100:8765
Nov 24 18:19:37 compute-0 ceph-mon[74927]: [24/Nov/2025:18:19:36] ENGINE Serving on https://192.168.122.100:7150
Nov 24 18:19:37 compute-0 ceph-mon[74927]: [24/Nov/2025:18:19:36] ENGINE Bus STARTED
Nov 24 18:19:37 compute-0 ceph-mon[74927]: [24/Nov/2025:18:19:36] ENGINE Client ('192.168.122.100', 47586) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 24 18:19:37 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 18:19:37 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:37 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-4388d472766a5fb4b5fccb512a0127dd8f20f7025e523751d03fb262ffdaad85-merged.mount: Deactivated successfully.
Nov 24 18:19:37 compute-0 podman[76322]: 2025-11-24 18:19:37.68614542 +0000 UTC m=+0.244699723 container remove f399ace9ec70427cc31f846cf309ff923c3c7885f6392c81d686b4f1ae6ced2d (image=quay.io/ceph/ceph:v18, name=nostalgic_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 18:19:37 compute-0 systemd[1]: libpod-conmon-f399ace9ec70427cc31f846cf309ff923c3c7885f6392c81d686b4f1ae6ced2d.scope: Deactivated successfully.
Nov 24 18:19:37 compute-0 podman[76338]: 2025-11-24 18:19:37.783468602 +0000 UTC m=+0.059990644 container create 7f22f8b947ebccabb8c867387309fe88fbaa0223efacf2c049719803eebd5d52 (image=quay.io/ceph/ceph:v18, name=pensive_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 24 18:19:37 compute-0 systemd[1]: Started libpod-conmon-7f22f8b947ebccabb8c867387309fe88fbaa0223efacf2c049719803eebd5d52.scope.
Nov 24 18:19:37 compute-0 podman[76338]: 2025-11-24 18:19:37.75658544 +0000 UTC m=+0.033107502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:19:37 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:19:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5f959a7dc88831e2783326b51c05482402b3e036bd516caab9692fa11ce098/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5f959a7dc88831e2783326b51c05482402b3e036bd516caab9692fa11ce098/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5f959a7dc88831e2783326b51c05482402b3e036bd516caab9692fa11ce098/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5f959a7dc88831e2783326b51c05482402b3e036bd516caab9692fa11ce098/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5f959a7dc88831e2783326b51c05482402b3e036bd516caab9692fa11ce098/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:37 compute-0 podman[76338]: 2025-11-24 18:19:37.875260861 +0000 UTC m=+0.151782933 container init 7f22f8b947ebccabb8c867387309fe88fbaa0223efacf2c049719803eebd5d52 (image=quay.io/ceph/ceph:v18, name=pensive_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 18:19:37 compute-0 podman[76338]: 2025-11-24 18:19:37.884594981 +0000 UTC m=+0.161117023 container start 7f22f8b947ebccabb8c867387309fe88fbaa0223efacf2c049719803eebd5d52 (image=quay.io/ceph/ceph:v18, name=pensive_hofstadter, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:19:37 compute-0 podman[76338]: 2025-11-24 18:19:37.899761031 +0000 UTC m=+0.176283293 container attach 7f22f8b947ebccabb8c867387309fe88fbaa0223efacf2c049719803eebd5d52 (image=quay.io/ceph/ceph:v18, name=pensive_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 18:19:38 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:19:38 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Nov 24 18:19:38 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:38 compute-0 ceph-mgr[75218]: [cephadm INFO root] Set ssh ssh_identity_key
Nov 24 18:19:38 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Nov 24 18:19:38 compute-0 ceph-mgr[75218]: [cephadm INFO root] Set ssh private key
Nov 24 18:19:38 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Set ssh private key
Nov 24 18:19:38 compute-0 systemd[1]: libpod-7f22f8b947ebccabb8c867387309fe88fbaa0223efacf2c049719803eebd5d52.scope: Deactivated successfully.
Nov 24 18:19:38 compute-0 podman[76338]: 2025-11-24 18:19:38.452852151 +0000 UTC m=+0.729374223 container died 7f22f8b947ebccabb8c867387309fe88fbaa0223efacf2c049719803eebd5d52 (image=quay.io/ceph/ceph:v18, name=pensive_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 18:19:38 compute-0 ceph-mgr[75218]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 18:19:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f5f959a7dc88831e2783326b51c05482402b3e036bd516caab9692fa11ce098-merged.mount: Deactivated successfully.
Nov 24 18:19:38 compute-0 podman[76338]: 2025-11-24 18:19:38.542259349 +0000 UTC m=+0.818781381 container remove 7f22f8b947ebccabb8c867387309fe88fbaa0223efacf2c049719803eebd5d52 (image=quay.io/ceph/ceph:v18, name=pensive_hofstadter, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 18:19:38 compute-0 systemd[1]: libpod-conmon-7f22f8b947ebccabb8c867387309fe88fbaa0223efacf2c049719803eebd5d52.scope: Deactivated successfully.
Nov 24 18:19:38 compute-0 ceph-mon[74927]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:19:38 compute-0 ceph-mon[74927]: Set ssh ssh_user
Nov 24 18:19:38 compute-0 ceph-mon[74927]: Set ssh ssh_config
Nov 24 18:19:38 compute-0 ceph-mon[74927]: ssh user set to ceph-admin. sudo will be used
Nov 24 18:19:38 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:38 compute-0 podman[76392]: 2025-11-24 18:19:38.657884442 +0000 UTC m=+0.082819170 container create 6eecba045b8303400ec58207ac05ae5e1404dd37fa824583c1f4df715b5ffde4 (image=quay.io/ceph/ceph:v18, name=elegant_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:19:38 compute-0 systemd[1]: Started libpod-conmon-6eecba045b8303400ec58207ac05ae5e1404dd37fa824583c1f4df715b5ffde4.scope.
Nov 24 18:19:38 compute-0 podman[76392]: 2025-11-24 18:19:38.618641743 +0000 UTC m=+0.043576551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:19:38 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:19:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40a7b6f5991e6db7ccfd6dd8450addf11d6acc89611c01c7e08958a506bf8190/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40a7b6f5991e6db7ccfd6dd8450addf11d6acc89611c01c7e08958a506bf8190/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40a7b6f5991e6db7ccfd6dd8450addf11d6acc89611c01c7e08958a506bf8190/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40a7b6f5991e6db7ccfd6dd8450addf11d6acc89611c01c7e08958a506bf8190/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40a7b6f5991e6db7ccfd6dd8450addf11d6acc89611c01c7e08958a506bf8190/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:38 compute-0 podman[76392]: 2025-11-24 18:19:38.76359799 +0000 UTC m=+0.188532748 container init 6eecba045b8303400ec58207ac05ae5e1404dd37fa824583c1f4df715b5ffde4 (image=quay.io/ceph/ceph:v18, name=elegant_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 18:19:38 compute-0 podman[76392]: 2025-11-24 18:19:38.777038335 +0000 UTC m=+0.201973063 container start 6eecba045b8303400ec58207ac05ae5e1404dd37fa824583c1f4df715b5ffde4 (image=quay.io/ceph/ceph:v18, name=elegant_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:19:38 compute-0 podman[76392]: 2025-11-24 18:19:38.813116373 +0000 UTC m=+0.238051131 container attach 6eecba045b8303400ec58207ac05ae5e1404dd37fa824583c1f4df715b5ffde4 (image=quay.io/ceph/ceph:v18, name=elegant_thompson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 18:19:39 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:19:39 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Nov 24 18:19:39 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:39 compute-0 ceph-mgr[75218]: [cephadm INFO root] Set ssh ssh_identity_pub
Nov 24 18:19:39 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Nov 24 18:19:39 compute-0 systemd[1]: libpod-6eecba045b8303400ec58207ac05ae5e1404dd37fa824583c1f4df715b5ffde4.scope: Deactivated successfully.
Nov 24 18:19:39 compute-0 podman[76392]: 2025-11-24 18:19:39.380402257 +0000 UTC m=+0.805336995 container died 6eecba045b8303400ec58207ac05ae5e1404dd37fa824583c1f4df715b5ffde4 (image=quay.io/ceph/ceph:v18, name=elegant_thompson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:19:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-40a7b6f5991e6db7ccfd6dd8450addf11d6acc89611c01c7e08958a506bf8190-merged.mount: Deactivated successfully.
Nov 24 18:19:39 compute-0 podman[76392]: 2025-11-24 18:19:39.42212212 +0000 UTC m=+0.847056838 container remove 6eecba045b8303400ec58207ac05ae5e1404dd37fa824583c1f4df715b5ffde4 (image=quay.io/ceph/ceph:v18, name=elegant_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:19:39 compute-0 systemd[1]: libpod-conmon-6eecba045b8303400ec58207ac05ae5e1404dd37fa824583c1f4df715b5ffde4.scope: Deactivated successfully.
Nov 24 18:19:39 compute-0 podman[76448]: 2025-11-24 18:19:39.49100562 +0000 UTC m=+0.047043710 container create b105046f208ef1e133dad7ea43dacddd577f07c8ce07351ecf4f19c1792d9cce (image=quay.io/ceph/ceph:v18, name=xenodochial_kowalevski, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 24 18:19:39 compute-0 systemd[1]: Started libpod-conmon-b105046f208ef1e133dad7ea43dacddd577f07c8ce07351ecf4f19c1792d9cce.scope.
Nov 24 18:19:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67277e9e3a1e3eae095f1d48617c881a7f3291f07c84c85b1cb54032e68f812/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67277e9e3a1e3eae095f1d48617c881a7f3291f07c84c85b1cb54032e68f812/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67277e9e3a1e3eae095f1d48617c881a7f3291f07c84c85b1cb54032e68f812/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:39 compute-0 podman[76448]: 2025-11-24 18:19:39.471496699 +0000 UTC m=+0.027534839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:19:39 compute-0 podman[76448]: 2025-11-24 18:19:39.578544801 +0000 UTC m=+0.134582991 container init b105046f208ef1e133dad7ea43dacddd577f07c8ce07351ecf4f19c1792d9cce (image=quay.io/ceph/ceph:v18, name=xenodochial_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:19:39 compute-0 ceph-mon[74927]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:19:39 compute-0 ceph-mon[74927]: Set ssh ssh_identity_key
Nov 24 18:19:39 compute-0 ceph-mon[74927]: Set ssh private key
Nov 24 18:19:39 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:39 compute-0 podman[76448]: 2025-11-24 18:19:39.589438271 +0000 UTC m=+0.145476401 container start b105046f208ef1e133dad7ea43dacddd577f07c8ce07351ecf4f19c1792d9cce (image=quay.io/ceph/ceph:v18, name=xenodochial_kowalevski, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:19:39 compute-0 podman[76448]: 2025-11-24 18:19:39.593053244 +0000 UTC m=+0.149091394 container attach b105046f208ef1e133dad7ea43dacddd577f07c8ce07351ecf4f19c1792d9cce (image=quay.io/ceph/ceph:v18, name=xenodochial_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:19:40 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:19:40 compute-0 xenodochial_kowalevski[76464]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDnwLNSoQ1U4EYT2Q7m2Q5aKUGikv/XawShYlOYI35cpqwzRLkTEvLJjiIInPBov/eo3EFPjZLZRWwJd5tZAhtM95ag00ajKW5pelqURnSlF7z1/HInd4lbjORN0QwA0gjOZZQi9kfU7tP/WdoAfZTzrAwq7PjCh7OBVUi1etEQC/A6BsmjMGLY6PvF6MaZ+Z6LGWzLcXfHv4ThRnT6eHoM3jv/bFkRBViEOTJlYlQ7B7TcWSZXO6bGXFl4HvSqhC+aZcB+owA2Pdf8RyhIyU2teCvqYpnt7LS3AzxAl/tDUVSayFoYbXYsGDMnb/5Jij9dZC1lB0SA9wN0yihcs5xxxLN9+M2njfsSyGQMbFluGmtjAYhFK9JZnB/xJWlMNIjNgFWc3art2/Ze4647fufJ77gn+G0O+duY/FV6mM8yPCIQjt7xbNhH14P7NMRZ9m2xWvh2GwBefsp7IEwmmWbUuAg/U2I3FmeZW4kkFK9I15FWokIqwulUQZ1yUN/x9xs= zuul@controller
Nov 24 18:19:40 compute-0 systemd[1]: libpod-b105046f208ef1e133dad7ea43dacddd577f07c8ce07351ecf4f19c1792d9cce.scope: Deactivated successfully.
Nov 24 18:19:40 compute-0 podman[76448]: 2025-11-24 18:19:40.113565496 +0000 UTC m=+0.669603606 container died b105046f208ef1e133dad7ea43dacddd577f07c8ce07351ecf4f19c1792d9cce (image=quay.io/ceph/ceph:v18, name=xenodochial_kowalevski, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 18:19:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-f67277e9e3a1e3eae095f1d48617c881a7f3291f07c84c85b1cb54032e68f812-merged.mount: Deactivated successfully.
Nov 24 18:19:40 compute-0 podman[76448]: 2025-11-24 18:19:40.160730498 +0000 UTC m=+0.716768588 container remove b105046f208ef1e133dad7ea43dacddd577f07c8ce07351ecf4f19c1792d9cce (image=quay.io/ceph/ceph:v18, name=xenodochial_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 18:19:40 compute-0 systemd[1]: libpod-conmon-b105046f208ef1e133dad7ea43dacddd577f07c8ce07351ecf4f19c1792d9cce.scope: Deactivated successfully.
Nov 24 18:19:40 compute-0 podman[76502]: 2025-11-24 18:19:40.225644077 +0000 UTC m=+0.042701878 container create c207e83c64ad15427860f70a7517741ae98b14fa7c42562593c638e7dccce998 (image=quay.io/ceph/ceph:v18, name=flamboyant_ramanujan, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:19:40 compute-0 systemd[1]: Started libpod-conmon-c207e83c64ad15427860f70a7517741ae98b14fa7c42562593c638e7dccce998.scope.
Nov 24 18:19:40 compute-0 podman[76502]: 2025-11-24 18:19:40.207198413 +0000 UTC m=+0.024256254 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:19:40 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83cd63f90b30827b6c65e8918765df44dedba81491d2f696c250ede48d4d1a6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83cd63f90b30827b6c65e8918765df44dedba81491d2f696c250ede48d4d1a6b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83cd63f90b30827b6c65e8918765df44dedba81491d2f696c250ede48d4d1a6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:40 compute-0 podman[76502]: 2025-11-24 18:19:40.351637377 +0000 UTC m=+0.168695248 container init c207e83c64ad15427860f70a7517741ae98b14fa7c42562593c638e7dccce998 (image=quay.io/ceph/ceph:v18, name=flamboyant_ramanujan, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Nov 24 18:19:40 compute-0 podman[76502]: 2025-11-24 18:19:40.357006705 +0000 UTC m=+0.174064536 container start c207e83c64ad15427860f70a7517741ae98b14fa7c42562593c638e7dccce998 (image=quay.io/ceph/ceph:v18, name=flamboyant_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:19:40 compute-0 podman[76502]: 2025-11-24 18:19:40.384673476 +0000 UTC m=+0.201731277 container attach c207e83c64ad15427860f70a7517741ae98b14fa7c42562593c638e7dccce998 (image=quay.io/ceph/ceph:v18, name=flamboyant_ramanujan, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 24 18:19:40 compute-0 ceph-mgr[75218]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 18:19:40 compute-0 ceph-mon[74927]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:19:40 compute-0 ceph-mon[74927]: Set ssh ssh_identity_pub
Nov 24 18:19:40 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:19:41 compute-0 sshd-session[76544]: Accepted publickey for ceph-admin from 192.168.122.100 port 59264 ssh2: RSA SHA256:UuqXPbG/GIyS8E+86W5iuX+TvUSxrYgcGPs97QlP9LE
Nov 24 18:19:41 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 24 18:19:41 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 24 18:19:41 compute-0 systemd-logind[822]: New session 20 of user ceph-admin.
Nov 24 18:19:41 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 24 18:19:41 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 24 18:19:41 compute-0 systemd[76548]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 18:19:41 compute-0 sshd-session[76551]: Accepted publickey for ceph-admin from 192.168.122.100 port 45662 ssh2: RSA SHA256:UuqXPbG/GIyS8E+86W5iuX+TvUSxrYgcGPs97QlP9LE
Nov 24 18:19:41 compute-0 systemd-logind[822]: New session 22 of user ceph-admin.
Nov 24 18:19:41 compute-0 systemd[76548]: Queued start job for default target Main User Target.
Nov 24 18:19:41 compute-0 systemd[76548]: Created slice User Application Slice.
Nov 24 18:19:41 compute-0 systemd[76548]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 24 18:19:41 compute-0 systemd[76548]: Started Daily Cleanup of User's Temporary Directories.
Nov 24 18:19:41 compute-0 systemd[76548]: Reached target Paths.
Nov 24 18:19:41 compute-0 systemd[76548]: Reached target Timers.
Nov 24 18:19:41 compute-0 systemd[76548]: Starting D-Bus User Message Bus Socket...
Nov 24 18:19:41 compute-0 systemd[76548]: Starting Create User's Volatile Files and Directories...
Nov 24 18:19:41 compute-0 systemd[76548]: Finished Create User's Volatile Files and Directories.
Nov 24 18:19:41 compute-0 systemd[76548]: Listening on D-Bus User Message Bus Socket.
Nov 24 18:19:41 compute-0 systemd[76548]: Reached target Sockets.
Nov 24 18:19:41 compute-0 systemd[76548]: Reached target Basic System.
Nov 24 18:19:41 compute-0 systemd[76548]: Reached target Main User Target.
Nov 24 18:19:41 compute-0 systemd[76548]: Startup finished in 189ms.
Nov 24 18:19:41 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 24 18:19:41 compute-0 systemd[1]: Started Session 20 of User ceph-admin.
Nov 24 18:19:41 compute-0 systemd[1]: Started Session 22 of User ceph-admin.
Nov 24 18:19:41 compute-0 sshd-session[76544]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 18:19:41 compute-0 sshd-session[76551]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 18:19:41 compute-0 sudo[76568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:41 compute-0 sudo[76568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:41 compute-0 sudo[76568]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:41 compute-0 ceph-mon[74927]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:19:41 compute-0 ceph-mon[74927]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:19:41 compute-0 sudo[76593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:19:41 compute-0 sudo[76593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:41 compute-0 sudo[76593]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:41 compute-0 sshd-session[76618]: Accepted publickey for ceph-admin from 192.168.122.100 port 45664 ssh2: RSA SHA256:UuqXPbG/GIyS8E+86W5iuX+TvUSxrYgcGPs97QlP9LE
Nov 24 18:19:41 compute-0 systemd-logind[822]: New session 23 of user ceph-admin.
Nov 24 18:19:41 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Nov 24 18:19:41 compute-0 sshd-session[76618]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 18:19:42 compute-0 sudo[76622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:42 compute-0 sudo[76622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:42 compute-0 sudo[76622]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:42 compute-0 sudo[76647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 24 18:19:42 compute-0 sudo[76647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:42 compute-0 sudo[76647]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:42 compute-0 sshd-session[76672]: Accepted publickey for ceph-admin from 192.168.122.100 port 45678 ssh2: RSA SHA256:UuqXPbG/GIyS8E+86W5iuX+TvUSxrYgcGPs97QlP9LE
Nov 24 18:19:42 compute-0 systemd-logind[822]: New session 24 of user ceph-admin.
Nov 24 18:19:42 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Nov 24 18:19:42 compute-0 sshd-session[76672]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 18:19:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053071 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:19:42 compute-0 ceph-mgr[75218]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 18:19:42 compute-0 sudo[76676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:42 compute-0 sudo[76676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:42 compute-0 sudo[76676]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:42 compute-0 sudo[76701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Nov 24 18:19:42 compute-0 sudo[76701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:42 compute-0 sudo[76701]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:42 compute-0 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Nov 24 18:19:42 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Nov 24 18:19:42 compute-0 sshd-session[76726]: Accepted publickey for ceph-admin from 192.168.122.100 port 45684 ssh2: RSA SHA256:UuqXPbG/GIyS8E+86W5iuX+TvUSxrYgcGPs97QlP9LE
Nov 24 18:19:42 compute-0 systemd-logind[822]: New session 25 of user ceph-admin.
Nov 24 18:19:42 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Nov 24 18:19:42 compute-0 sshd-session[76726]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 18:19:42 compute-0 sudo[76730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:42 compute-0 sudo[76730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:42 compute-0 sudo[76730]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:43 compute-0 sudo[76755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 18:19:43 compute-0 sudo[76755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:43 compute-0 sudo[76755]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:43 compute-0 ceph-mon[74927]: Deploying cephadm binary to compute-0
Nov 24 18:19:43 compute-0 sshd-session[76780]: Accepted publickey for ceph-admin from 192.168.122.100 port 45688 ssh2: RSA SHA256:UuqXPbG/GIyS8E+86W5iuX+TvUSxrYgcGPs97QlP9LE
Nov 24 18:19:43 compute-0 systemd-logind[822]: New session 26 of user ceph-admin.
Nov 24 18:19:43 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Nov 24 18:19:43 compute-0 sshd-session[76780]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 18:19:43 compute-0 sudo[76784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:43 compute-0 sudo[76784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:43 compute-0 sudo[76784]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:43 compute-0 sudo[76809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d/var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 18:19:43 compute-0 sudo[76809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:43 compute-0 sudo[76809]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:43 compute-0 sshd-session[76834]: Accepted publickey for ceph-admin from 192.168.122.100 port 45696 ssh2: RSA SHA256:UuqXPbG/GIyS8E+86W5iuX+TvUSxrYgcGPs97QlP9LE
Nov 24 18:19:43 compute-0 systemd-logind[822]: New session 27 of user ceph-admin.
Nov 24 18:19:43 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Nov 24 18:19:43 compute-0 sshd-session[76834]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 18:19:43 compute-0 sudo[76838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:43 compute-0 sudo[76838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:43 compute-0 sudo[76838]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:43 compute-0 sudo[76863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d/var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Nov 24 18:19:44 compute-0 sudo[76863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:44 compute-0 sudo[76863]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:44 compute-0 sshd-session[76888]: Accepted publickey for ceph-admin from 192.168.122.100 port 45706 ssh2: RSA SHA256:UuqXPbG/GIyS8E+86W5iuX+TvUSxrYgcGPs97QlP9LE
Nov 24 18:19:44 compute-0 systemd-logind[822]: New session 28 of user ceph-admin.
Nov 24 18:19:44 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Nov 24 18:19:44 compute-0 sshd-session[76888]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 18:19:44 compute-0 sudo[76892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:44 compute-0 sudo[76892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:44 compute-0 sudo[76892]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:44 compute-0 ceph-mgr[75218]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 18:19:44 compute-0 sudo[76917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 18:19:44 compute-0 sudo[76917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:44 compute-0 sudo[76917]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:44 compute-0 sshd-session[76942]: Accepted publickey for ceph-admin from 192.168.122.100 port 45708 ssh2: RSA SHA256:UuqXPbG/GIyS8E+86W5iuX+TvUSxrYgcGPs97QlP9LE
Nov 24 18:19:44 compute-0 systemd-logind[822]: New session 29 of user ceph-admin.
Nov 24 18:19:44 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Nov 24 18:19:44 compute-0 sshd-session[76942]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 18:19:44 compute-0 sudo[76946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:44 compute-0 sudo[76946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:44 compute-0 sudo[76946]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:44 compute-0 sudo[76971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d/var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Nov 24 18:19:44 compute-0 sudo[76971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:44 compute-0 sudo[76971]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:45 compute-0 sshd-session[76996]: Accepted publickey for ceph-admin from 192.168.122.100 port 45710 ssh2: RSA SHA256:UuqXPbG/GIyS8E+86W5iuX+TvUSxrYgcGPs97QlP9LE
Nov 24 18:19:45 compute-0 systemd-logind[822]: New session 30 of user ceph-admin.
Nov 24 18:19:45 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Nov 24 18:19:45 compute-0 sshd-session[76996]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 18:19:45 compute-0 sshd-session[77023]: Accepted publickey for ceph-admin from 192.168.122.100 port 45726 ssh2: RSA SHA256:UuqXPbG/GIyS8E+86W5iuX+TvUSxrYgcGPs97QlP9LE
Nov 24 18:19:45 compute-0 systemd-logind[822]: New session 31 of user ceph-admin.
Nov 24 18:19:45 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Nov 24 18:19:45 compute-0 sshd-session[77023]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 18:19:45 compute-0 sudo[77027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:45 compute-0 sudo[77027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:45 compute-0 sudo[77027]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:46 compute-0 sudo[77052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d/var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Nov 24 18:19:46 compute-0 sudo[77052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:46 compute-0 sudo[77052]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:46 compute-0 sshd-session[77077]: Accepted publickey for ceph-admin from 192.168.122.100 port 45732 ssh2: RSA SHA256:UuqXPbG/GIyS8E+86W5iuX+TvUSxrYgcGPs97QlP9LE
Nov 24 18:19:46 compute-0 systemd-logind[822]: New session 32 of user ceph-admin.
Nov 24 18:19:46 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Nov 24 18:19:46 compute-0 sshd-session[77077]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 18:19:46 compute-0 sudo[77081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:46 compute-0 sudo[77081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:46 compute-0 sudo[77081]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:46 compute-0 sudo[77106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 24 18:19:46 compute-0 sudo[77106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:46 compute-0 ceph-mgr[75218]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 18:19:46 compute-0 sudo[77106]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:46 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 24 18:19:46 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:46 compute-0 ceph-mgr[75218]: [cephadm INFO root] Added host compute-0
Nov 24 18:19:46 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 24 18:19:46 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 24 18:19:46 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 18:19:46 compute-0 flamboyant_ramanujan[76518]: Added host 'compute-0' with addr '192.168.122.100'
Nov 24 18:19:46 compute-0 systemd[1]: libpod-c207e83c64ad15427860f70a7517741ae98b14fa7c42562593c638e7dccce998.scope: Deactivated successfully.
Nov 24 18:19:46 compute-0 sudo[77152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:46 compute-0 sudo[77152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:46 compute-0 sudo[77152]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:46 compute-0 podman[77170]: 2025-11-24 18:19:46.840117825 +0000 UTC m=+0.038950583 container died c207e83c64ad15427860f70a7517741ae98b14fa7c42562593c638e7dccce998 (image=quay.io/ceph/ceph:v18, name=flamboyant_ramanujan, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:19:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-83cd63f90b30827b6c65e8918765df44dedba81491d2f696c250ede48d4d1a6b-merged.mount: Deactivated successfully.
Nov 24 18:19:46 compute-0 podman[77170]: 2025-11-24 18:19:46.884306351 +0000 UTC m=+0.083139059 container remove c207e83c64ad15427860f70a7517741ae98b14fa7c42562593c638e7dccce998 (image=quay.io/ceph/ceph:v18, name=flamboyant_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 24 18:19:46 compute-0 systemd[1]: libpod-conmon-c207e83c64ad15427860f70a7517741ae98b14fa7c42562593c638e7dccce998.scope: Deactivated successfully.
Nov 24 18:19:46 compute-0 sudo[77192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:19:46 compute-0 sudo[77192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:46 compute-0 sudo[77192]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:46 compute-0 podman[77217]: 2025-11-24 18:19:46.958677403 +0000 UTC m=+0.044060164 container create 57fd4a3861670ad14ac25eaf1dfde2bb94c738e14b79c09019ca2fdadd121613 (image=quay.io/ceph/ceph:v18, name=blissful_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:19:46 compute-0 sudo[77226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:46 compute-0 sudo[77226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:46 compute-0 sudo[77226]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:46 compute-0 systemd[1]: Started libpod-conmon-57fd4a3861670ad14ac25eaf1dfde2bb94c738e14b79c09019ca2fdadd121613.scope.
Nov 24 18:19:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ce5f3305c03bb9954ae78d5b9d467945b633a6b5b2a2fc8f94aa15713d1a737/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ce5f3305c03bb9954ae78d5b9d467945b633a6b5b2a2fc8f94aa15713d1a737/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ce5f3305c03bb9954ae78d5b9d467945b633a6b5b2a2fc8f94aa15713d1a737/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:47 compute-0 podman[77217]: 2025-11-24 18:19:46.940072464 +0000 UTC m=+0.025455245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:19:47 compute-0 sudo[77261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph:v18 --timeout 895 inspect-image
Nov 24 18:19:47 compute-0 podman[77217]: 2025-11-24 18:19:47.044283244 +0000 UTC m=+0.129666005 container init 57fd4a3861670ad14ac25eaf1dfde2bb94c738e14b79c09019ca2fdadd121613 (image=quay.io/ceph/ceph:v18, name=blissful_saha, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 24 18:19:47 compute-0 sudo[77261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:47 compute-0 podman[77217]: 2025-11-24 18:19:47.055099172 +0000 UTC m=+0.140481933 container start 57fd4a3861670ad14ac25eaf1dfde2bb94c738e14b79c09019ca2fdadd121613 (image=quay.io/ceph/ceph:v18, name=blissful_saha, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:19:47 compute-0 podman[77217]: 2025-11-24 18:19:47.058207411 +0000 UTC m=+0.143590172 container attach 57fd4a3861670ad14ac25eaf1dfde2bb94c738e14b79c09019ca2fdadd121613 (image=quay.io/ceph/ceph:v18, name=blissful_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 24 18:19:47 compute-0 podman[77318]: 2025-11-24 18:19:47.330175343 +0000 UTC m=+0.051921835 container create e70fde046f0eb1e9c64b91c1e2366a43d14ef14b207ba2e722165e0dc74675f5 (image=quay.io/ceph/ceph:v18, name=awesome_lovelace, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 18:19:47 compute-0 systemd[1]: Started libpod-conmon-e70fde046f0eb1e9c64b91c1e2366a43d14ef14b207ba2e722165e0dc74675f5.scope.
Nov 24 18:19:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:19:47 compute-0 podman[77318]: 2025-11-24 18:19:47.304539634 +0000 UTC m=+0.026286126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:19:47 compute-0 podman[77318]: 2025-11-24 18:19:47.406378562 +0000 UTC m=+0.128125074 container init e70fde046f0eb1e9c64b91c1e2366a43d14ef14b207ba2e722165e0dc74675f5 (image=quay.io/ceph/ceph:v18, name=awesome_lovelace, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Nov 24 18:19:47 compute-0 podman[77318]: 2025-11-24 18:19:47.413599128 +0000 UTC m=+0.135345620 container start e70fde046f0eb1e9c64b91c1e2366a43d14ef14b207ba2e722165e0dc74675f5 (image=quay.io/ceph/ceph:v18, name=awesome_lovelace, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:19:47 compute-0 podman[77318]: 2025-11-24 18:19:47.416950814 +0000 UTC m=+0.138697316 container attach e70fde046f0eb1e9c64b91c1e2366a43d14ef14b207ba2e722165e0dc74675f5 (image=quay.io/ceph/ceph:v18, name=awesome_lovelace, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 18:19:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:19:47 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:19:47 compute-0 ceph-mgr[75218]: [cephadm INFO root] Saving service mon spec with placement count:5
Nov 24 18:19:47 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Nov 24 18:19:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 24 18:19:47 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:47 compute-0 blissful_saha[77265]: Scheduled mon update...
Nov 24 18:19:47 compute-0 systemd[1]: libpod-57fd4a3861670ad14ac25eaf1dfde2bb94c738e14b79c09019ca2fdadd121613.scope: Deactivated successfully.
Nov 24 18:19:47 compute-0 podman[77217]: 2025-11-24 18:19:47.658419992 +0000 UTC m=+0.743802753 container died 57fd4a3861670ad14ac25eaf1dfde2bb94c738e14b79c09019ca2fdadd121613 (image=quay.io/ceph/ceph:v18, name=blissful_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 18:19:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ce5f3305c03bb9954ae78d5b9d467945b633a6b5b2a2fc8f94aa15713d1a737-merged.mount: Deactivated successfully.
Nov 24 18:19:47 compute-0 podman[77217]: 2025-11-24 18:19:47.699335844 +0000 UTC m=+0.784718605 container remove 57fd4a3861670ad14ac25eaf1dfde2bb94c738e14b79c09019ca2fdadd121613 (image=quay.io/ceph/ceph:v18, name=blissful_saha, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 18:19:47 compute-0 awesome_lovelace[77335]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 24 18:19:47 compute-0 systemd[1]: libpod-conmon-57fd4a3861670ad14ac25eaf1dfde2bb94c738e14b79c09019ca2fdadd121613.scope: Deactivated successfully.
Nov 24 18:19:47 compute-0 systemd[1]: libpod-e70fde046f0eb1e9c64b91c1e2366a43d14ef14b207ba2e722165e0dc74675f5.scope: Deactivated successfully.
Nov 24 18:19:47 compute-0 podman[77318]: 2025-11-24 18:19:47.721595576 +0000 UTC m=+0.443342068 container died e70fde046f0eb1e9c64b91c1e2366a43d14ef14b207ba2e722165e0dc74675f5 (image=quay.io/ceph/ceph:v18, name=awesome_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:19:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-e01b3fd59ae997e6b6ee6f237d44f20dda4cb1424c30a4c40e090240d3355a68-merged.mount: Deactivated successfully.
Nov 24 18:19:47 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:47 compute-0 ceph-mon[74927]: Added host compute-0
Nov 24 18:19:47 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 18:19:47 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:47 compute-0 podman[77318]: 2025-11-24 18:19:47.770588996 +0000 UTC m=+0.492335488 container remove e70fde046f0eb1e9c64b91c1e2366a43d14ef14b207ba2e722165e0dc74675f5 (image=quay.io/ceph/ceph:v18, name=awesome_lovelace, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 18:19:47 compute-0 systemd[1]: libpod-conmon-e70fde046f0eb1e9c64b91c1e2366a43d14ef14b207ba2e722165e0dc74675f5.scope: Deactivated successfully.
Nov 24 18:19:47 compute-0 podman[77373]: 2025-11-24 18:19:47.793966147 +0000 UTC m=+0.068952314 container create ce247e1536e049c78708010198f4b3e3e7726b574f2ff49b2f8ec4529c4d88fe (image=quay.io/ceph/ceph:v18, name=awesome_cori, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 18:19:47 compute-0 sudo[77261]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Nov 24 18:19:47 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:47 compute-0 systemd[1]: Started libpod-conmon-ce247e1536e049c78708010198f4b3e3e7726b574f2ff49b2f8ec4529c4d88fe.scope.
Nov 24 18:19:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:19:47 compute-0 podman[77373]: 2025-11-24 18:19:47.772377412 +0000 UTC m=+0.047363609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9249677785a959dea2784ff02723449ab10a580f8f73bfb413e0048c0cc61a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9249677785a959dea2784ff02723449ab10a580f8f73bfb413e0048c0cc61a2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9249677785a959dea2784ff02723449ab10a580f8f73bfb413e0048c0cc61a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:47 compute-0 podman[77373]: 2025-11-24 18:19:47.884406852 +0000 UTC m=+0.159393049 container init ce247e1536e049c78708010198f4b3e3e7726b574f2ff49b2f8ec4529c4d88fe (image=quay.io/ceph/ceph:v18, name=awesome_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 18:19:47 compute-0 podman[77373]: 2025-11-24 18:19:47.890034627 +0000 UTC m=+0.165020804 container start ce247e1536e049c78708010198f4b3e3e7726b574f2ff49b2f8ec4529c4d88fe (image=quay.io/ceph/ceph:v18, name=awesome_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 18:19:47 compute-0 sudo[77403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:47 compute-0 sudo[77403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:47 compute-0 podman[77373]: 2025-11-24 18:19:47.894908212 +0000 UTC m=+0.169894389 container attach ce247e1536e049c78708010198f4b3e3e7726b574f2ff49b2f8ec4529c4d88fe (image=quay.io/ceph/ceph:v18, name=awesome_cori, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 18:19:47 compute-0 sudo[77403]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:47 compute-0 sudo[77434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:19:47 compute-0 sudo[77434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:47 compute-0 sudo[77434]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:48 compute-0 sudo[77459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:48 compute-0 sudo[77459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:48 compute-0 sudo[77459]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:48 compute-0 sudo[77484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 24 18:19:48 compute-0 sudo[77484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:48 compute-0 sudo[77484]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:19:48 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:48 compute-0 sudo[77546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:48 compute-0 sudo[77546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:48 compute-0 sudo[77546]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:48 compute-0 sudo[77571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:19:48 compute-0 sudo[77571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:48 compute-0 sudo[77571]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:48 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:19:48 compute-0 ceph-mgr[75218]: [cephadm INFO root] Saving service mgr spec with placement count:2
Nov 24 18:19:48 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Nov 24 18:19:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 24 18:19:48 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:48 compute-0 awesome_cori[77407]: Scheduled mgr update...
Nov 24 18:19:48 compute-0 sudo[77596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:48 compute-0 sudo[77596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:48 compute-0 sudo[77596]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:48 compute-0 systemd[1]: libpod-ce247e1536e049c78708010198f4b3e3e7726b574f2ff49b2f8ec4529c4d88fe.scope: Deactivated successfully.
Nov 24 18:19:48 compute-0 podman[77373]: 2025-11-24 18:19:48.444423919 +0000 UTC m=+0.719410096 container died ce247e1536e049c78708010198f4b3e3e7726b574f2ff49b2f8ec4529c4d88fe (image=quay.io/ceph/ceph:v18, name=awesome_cori, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:19:48 compute-0 ceph-mgr[75218]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 18:19:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9249677785a959dea2784ff02723449ab10a580f8f73bfb413e0048c0cc61a2-merged.mount: Deactivated successfully.
Nov 24 18:19:48 compute-0 podman[77373]: 2025-11-24 18:19:48.479788238 +0000 UTC m=+0.754774415 container remove ce247e1536e049c78708010198f4b3e3e7726b574f2ff49b2f8ec4529c4d88fe (image=quay.io/ceph/ceph:v18, name=awesome_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 18:19:48 compute-0 systemd[1]: libpod-conmon-ce247e1536e049c78708010198f4b3e3e7726b574f2ff49b2f8ec4529c4d88fe.scope: Deactivated successfully.
Nov 24 18:19:48 compute-0 sudo[77623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 24 18:19:48 compute-0 sudo[77623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:48 compute-0 podman[77657]: 2025-11-24 18:19:48.533323384 +0000 UTC m=+0.035377170 container create 2b100e0067e4f581f65b833cc7f94a833eaf1c7812e04226f60c68fc0a3c67ec (image=quay.io/ceph/ceph:v18, name=awesome_cohen, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 18:19:48 compute-0 systemd[1]: Started libpod-conmon-2b100e0067e4f581f65b833cc7f94a833eaf1c7812e04226f60c68fc0a3c67ec.scope.
Nov 24 18:19:48 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:19:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f68b85bafb1193e052ead1bf48e693045ec831a1d095b51466004768e9551a5a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f68b85bafb1193e052ead1bf48e693045ec831a1d095b51466004768e9551a5a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f68b85bafb1193e052ead1bf48e693045ec831a1d095b51466004768e9551a5a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:48 compute-0 podman[77657]: 2025-11-24 18:19:48.597226387 +0000 UTC m=+0.099280253 container init 2b100e0067e4f581f65b833cc7f94a833eaf1c7812e04226f60c68fc0a3c67ec (image=quay.io/ceph/ceph:v18, name=awesome_cohen, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:19:48 compute-0 podman[77657]: 2025-11-24 18:19:48.60394697 +0000 UTC m=+0.106000766 container start 2b100e0067e4f581f65b833cc7f94a833eaf1c7812e04226f60c68fc0a3c67ec (image=quay.io/ceph/ceph:v18, name=awesome_cohen, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:19:48 compute-0 podman[77657]: 2025-11-24 18:19:48.607639165 +0000 UTC m=+0.109692991 container attach 2b100e0067e4f581f65b833cc7f94a833eaf1c7812e04226f60c68fc0a3c67ec (image=quay.io/ceph/ceph:v18, name=awesome_cohen, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 24 18:19:48 compute-0 podman[77657]: 2025-11-24 18:19:48.5179964 +0000 UTC m=+0.020050206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:19:48 compute-0 ceph-mon[74927]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:19:48 compute-0 ceph-mon[74927]: Saving service mon spec with placement count:5
Nov 24 18:19:48 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:48 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:48 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:48 compute-0 podman[77749]: 2025-11-24 18:19:48.967021504 +0000 UTC m=+0.048814426 container exec 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:19:49 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:19:49 compute-0 ceph-mgr[75218]: [cephadm INFO root] Saving service crash spec with placement *
Nov 24 18:19:49 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Nov 24 18:19:49 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 24 18:19:49 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:49 compute-0 awesome_cohen[77674]: Scheduled crash update...
Nov 24 18:19:49 compute-0 systemd[1]: libpod-2b100e0067e4f581f65b833cc7f94a833eaf1c7812e04226f60c68fc0a3c67ec.scope: Deactivated successfully.
Nov 24 18:19:49 compute-0 podman[77657]: 2025-11-24 18:19:49.175044802 +0000 UTC m=+0.677098608 container died 2b100e0067e4f581f65b833cc7f94a833eaf1c7812e04226f60c68fc0a3c67ec (image=quay.io/ceph/ceph:v18, name=awesome_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:19:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-f68b85bafb1193e052ead1bf48e693045ec831a1d095b51466004768e9551a5a-merged.mount: Deactivated successfully.
Nov 24 18:19:49 compute-0 podman[77657]: 2025-11-24 18:19:49.215007839 +0000 UTC m=+0.717061645 container remove 2b100e0067e4f581f65b833cc7f94a833eaf1c7812e04226f60c68fc0a3c67ec (image=quay.io/ceph/ceph:v18, name=awesome_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:19:49 compute-0 systemd[1]: libpod-conmon-2b100e0067e4f581f65b833cc7f94a833eaf1c7812e04226f60c68fc0a3c67ec.scope: Deactivated successfully.
Nov 24 18:19:49 compute-0 podman[77749]: 2025-11-24 18:19:49.250987854 +0000 UTC m=+0.332780756 container exec_died 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:19:49 compute-0 podman[77803]: 2025-11-24 18:19:49.28739125 +0000 UTC m=+0.049164335 container create 6b5d59727d8936a695934a006bc24d79ec3654acce3f91be2534cf353e441162 (image=quay.io/ceph/ceph:v18, name=angry_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:19:49 compute-0 systemd[1]: Started libpod-conmon-6b5d59727d8936a695934a006bc24d79ec3654acce3f91be2534cf353e441162.scope.
Nov 24 18:19:49 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:19:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1921e0a5c7c944d64d9f44713cbb8300270185e652e54d6e2895b5395bf2ce75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1921e0a5c7c944d64d9f44713cbb8300270185e652e54d6e2895b5395bf2ce75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1921e0a5c7c944d64d9f44713cbb8300270185e652e54d6e2895b5395bf2ce75/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:49 compute-0 podman[77803]: 2025-11-24 18:19:49.363158248 +0000 UTC m=+0.124931343 container init 6b5d59727d8936a695934a006bc24d79ec3654acce3f91be2534cf353e441162 (image=quay.io/ceph/ceph:v18, name=angry_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:19:49 compute-0 podman[77803]: 2025-11-24 18:19:49.270464385 +0000 UTC m=+0.032237490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:19:49 compute-0 podman[77803]: 2025-11-24 18:19:49.370200399 +0000 UTC m=+0.131973484 container start 6b5d59727d8936a695934a006bc24d79ec3654acce3f91be2534cf353e441162 (image=quay.io/ceph/ceph:v18, name=angry_newton, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:19:49 compute-0 podman[77803]: 2025-11-24 18:19:49.373660078 +0000 UTC m=+0.135433163 container attach 6b5d59727d8936a695934a006bc24d79ec3654acce3f91be2534cf353e441162 (image=quay.io/ceph/ceph:v18, name=angry_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:19:49 compute-0 sudo[77623]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:49 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:19:49 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:49 compute-0 sudo[77855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:49 compute-0 sudo[77855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:49 compute-0 sudo[77855]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:49 compute-0 sudo[77880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:19:49 compute-0 sudo[77880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:49 compute-0 sudo[77880]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:49 compute-0 sudo[77905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:49 compute-0 sudo[77905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:49 compute-0 sudo[77905]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:49 compute-0 sudo[77931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:19:49 compute-0 sudo[77931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:49 compute-0 ceph-mon[74927]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:19:49 compute-0 ceph-mon[74927]: Saving service mgr spec with placement count:2
Nov 24 18:19:49 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:49 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:49 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Nov 24 18:19:49 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3735356326' entity='client.admin' 
Nov 24 18:19:49 compute-0 systemd[1]: libpod-6b5d59727d8936a695934a006bc24d79ec3654acce3f91be2534cf353e441162.scope: Deactivated successfully.
Nov 24 18:19:49 compute-0 podman[77803]: 2025-11-24 18:19:49.919322567 +0000 UTC m=+0.681095692 container died 6b5d59727d8936a695934a006bc24d79ec3654acce3f91be2534cf353e441162 (image=quay.io/ceph/ceph:v18, name=angry_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 18:19:49 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77989 (sysctl)
Nov 24 18:19:49 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 24 18:19:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-1921e0a5c7c944d64d9f44713cbb8300270185e652e54d6e2895b5395bf2ce75-merged.mount: Deactivated successfully.
Nov 24 18:19:49 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 24 18:19:49 compute-0 podman[77803]: 2025-11-24 18:19:49.97703256 +0000 UTC m=+0.738805645 container remove 6b5d59727d8936a695934a006bc24d79ec3654acce3f91be2534cf353e441162 (image=quay.io/ceph/ceph:v18, name=angry_newton, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:19:49 compute-0 systemd[1]: libpod-conmon-6b5d59727d8936a695934a006bc24d79ec3654acce3f91be2534cf353e441162.scope: Deactivated successfully.
Nov 24 18:19:50 compute-0 podman[78003]: 2025-11-24 18:19:50.058336281 +0000 UTC m=+0.058993718 container create ff62592166534391aee4b1c1b1f2c190e789bd13fde518b5e26e4e4827a8a122 (image=quay.io/ceph/ceph:v18, name=loving_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:19:50 compute-0 systemd[1]: Started libpod-conmon-ff62592166534391aee4b1c1b1f2c190e789bd13fde518b5e26e4e4827a8a122.scope.
Nov 24 18:19:50 compute-0 podman[78003]: 2025-11-24 18:19:50.029108629 +0000 UTC m=+0.029766126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:19:50 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:19:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a5a7721258c7137d615515bbc15a9b45d954bc757549d82a058125bc7c01429/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a5a7721258c7137d615515bbc15a9b45d954bc757549d82a058125bc7c01429/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a5a7721258c7137d615515bbc15a9b45d954bc757549d82a058125bc7c01429/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:50 compute-0 podman[78003]: 2025-11-24 18:19:50.160289132 +0000 UTC m=+0.160946549 container init ff62592166534391aee4b1c1b1f2c190e789bd13fde518b5e26e4e4827a8a122 (image=quay.io/ceph/ceph:v18, name=loving_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:19:50 compute-0 podman[78003]: 2025-11-24 18:19:50.168534154 +0000 UTC m=+0.169191591 container start ff62592166534391aee4b1c1b1f2c190e789bd13fde518b5e26e4e4827a8a122 (image=quay.io/ceph/ceph:v18, name=loving_taussig, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 18:19:50 compute-0 podman[78003]: 2025-11-24 18:19:50.172341691 +0000 UTC m=+0.172999088 container attach ff62592166534391aee4b1c1b1f2c190e789bd13fde518b5e26e4e4827a8a122 (image=quay.io/ceph/ceph:v18, name=loving_taussig, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:19:50 compute-0 sudo[77931]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:50 compute-0 ceph-mgr[75218]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 18:19:50 compute-0 sudo[78042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:50 compute-0 sudo[78042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:50 compute-0 sudo[78042]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:50 compute-0 sudo[78086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:19:50 compute-0 sudo[78086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:50 compute-0 sudo[78086]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:50 compute-0 sudo[78111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:50 compute-0 sudo[78111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:50 compute-0 sudo[78111]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:50 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:19:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Nov 24 18:19:50 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:50 compute-0 systemd[1]: libpod-ff62592166534391aee4b1c1b1f2c190e789bd13fde518b5e26e4e4827a8a122.scope: Deactivated successfully.
Nov 24 18:19:50 compute-0 podman[78003]: 2025-11-24 18:19:50.701374882 +0000 UTC m=+0.702032289 container died ff62592166534391aee4b1c1b1f2c190e789bd13fde518b5e26e4e4827a8a122 (image=quay.io/ceph/ceph:v18, name=loving_taussig, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:19:50 compute-0 sudo[78136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 24 18:19:50 compute-0 sudo[78136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a5a7721258c7137d615515bbc15a9b45d954bc757549d82a058125bc7c01429-merged.mount: Deactivated successfully.
Nov 24 18:19:50 compute-0 podman[78003]: 2025-11-24 18:19:50.759592399 +0000 UTC m=+0.760249836 container remove ff62592166534391aee4b1c1b1f2c190e789bd13fde518b5e26e4e4827a8a122 (image=quay.io/ceph/ceph:v18, name=loving_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 18:19:50 compute-0 systemd[1]: libpod-conmon-ff62592166534391aee4b1c1b1f2c190e789bd13fde518b5e26e4e4827a8a122.scope: Deactivated successfully.
Nov 24 18:19:50 compute-0 podman[78177]: 2025-11-24 18:19:50.850572198 +0000 UTC m=+0.060619199 container create 6b913e7083679fb3c38c2845c7c6f3f52908c74722c534c83738a110d4eb3bf3 (image=quay.io/ceph/ceph:v18, name=epic_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:19:50 compute-0 ceph-mon[74927]: from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:19:50 compute-0 ceph-mon[74927]: Saving service crash spec with placement *
Nov 24 18:19:50 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3735356326' entity='client.admin' 
Nov 24 18:19:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:50 compute-0 systemd[1]: Started libpod-conmon-6b913e7083679fb3c38c2845c7c6f3f52908c74722c534c83738a110d4eb3bf3.scope.
Nov 24 18:19:50 compute-0 podman[78177]: 2025-11-24 18:19:50.822180088 +0000 UTC m=+0.032227139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:19:50 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:19:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a49c494f8967391950032d2199b06d5832319b9e16ae1f6f7de9e4b29be3f38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a49c494f8967391950032d2199b06d5832319b9e16ae1f6f7de9e4b29be3f38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a49c494f8967391950032d2199b06d5832319b9e16ae1f6f7de9e4b29be3f38/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:50 compute-0 podman[78177]: 2025-11-24 18:19:50.96035096 +0000 UTC m=+0.170398031 container init 6b913e7083679fb3c38c2845c7c6f3f52908c74722c534c83738a110d4eb3bf3 (image=quay.io/ceph/ceph:v18, name=epic_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 18:19:50 compute-0 podman[78177]: 2025-11-24 18:19:50.9684895 +0000 UTC m=+0.178536511 container start 6b913e7083679fb3c38c2845c7c6f3f52908c74722c534c83738a110d4eb3bf3 (image=quay.io/ceph/ceph:v18, name=epic_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:19:50 compute-0 podman[78177]: 2025-11-24 18:19:50.973030656 +0000 UTC m=+0.183077657 container attach 6b913e7083679fb3c38c2845c7c6f3f52908c74722c534c83738a110d4eb3bf3 (image=quay.io/ceph/ceph:v18, name=epic_swanson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 18:19:50 compute-0 sudo[78136]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:19:51 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:51 compute-0 sudo[78217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:51 compute-0 sudo[78217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:51 compute-0 sudo[78217]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:51 compute-0 sudo[78242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:19:51 compute-0 sudo[78242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:51 compute-0 sudo[78242]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:51 compute-0 sudo[78267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:51 compute-0 sudo[78267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:51 compute-0 sudo[78267]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:51 compute-0 sudo[78292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- inventory --format=json-pretty --filter-for-batch
Nov 24 18:19:51 compute-0 sudo[78292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:51 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:19:51 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 24 18:19:51 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:51 compute-0 ceph-mgr[75218]: [cephadm INFO root] Added label _admin to host compute-0
Nov 24 18:19:51 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Nov 24 18:19:51 compute-0 epic_swanson[78206]: Added label _admin to host compute-0
Nov 24 18:19:51 compute-0 podman[78177]: 2025-11-24 18:19:51.537772474 +0000 UTC m=+0.747819475 container died 6b913e7083679fb3c38c2845c7c6f3f52908c74722c534c83738a110d4eb3bf3 (image=quay.io/ceph/ceph:v18, name=epic_swanson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 18:19:51 compute-0 systemd[1]: libpod-6b913e7083679fb3c38c2845c7c6f3f52908c74722c534c83738a110d4eb3bf3.scope: Deactivated successfully.
Nov 24 18:19:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a49c494f8967391950032d2199b06d5832319b9e16ae1f6f7de9e4b29be3f38-merged.mount: Deactivated successfully.
Nov 24 18:19:51 compute-0 podman[78177]: 2025-11-24 18:19:51.611651884 +0000 UTC m=+0.821698845 container remove 6b913e7083679fb3c38c2845c7c6f3f52908c74722c534c83738a110d4eb3bf3 (image=quay.io/ceph/ceph:v18, name=epic_swanson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 24 18:19:51 compute-0 systemd[1]: libpod-conmon-6b913e7083679fb3c38c2845c7c6f3f52908c74722c534c83738a110d4eb3bf3.scope: Deactivated successfully.
Nov 24 18:19:51 compute-0 podman[78377]: 2025-11-24 18:19:51.681445098 +0000 UTC m=+0.046753303 container create f22006b2f1ed6e452bc338277cb6cb8c7f88b4bd0643164d7d922d3a22e105ca (image=quay.io/ceph/ceph:v18, name=youthful_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:19:51 compute-0 systemd[1]: Started libpod-conmon-f22006b2f1ed6e452bc338277cb6cb8c7f88b4bd0643164d7d922d3a22e105ca.scope.
Nov 24 18:19:51 compute-0 podman[78377]: 2025-11-24 18:19:51.657445801 +0000 UTC m=+0.022754026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:19:51 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:19:51 compute-0 podman[78405]: 2025-11-24 18:19:51.759626748 +0000 UTC m=+0.045589953 container create f1bcbd1729cdba1323091035af3bebcce8f8e51322c7cb7b64bee586f59d2e5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:19:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee484dfdd6bbe71bb539a6240aced83d58e22072997bbda19c11884bd14baec1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee484dfdd6bbe71bb539a6240aced83d58e22072997bbda19c11884bd14baec1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee484dfdd6bbe71bb539a6240aced83d58e22072997bbda19c11884bd14baec1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:51 compute-0 podman[78377]: 2025-11-24 18:19:51.774450619 +0000 UTC m=+0.139758864 container init f22006b2f1ed6e452bc338277cb6cb8c7f88b4bd0643164d7d922d3a22e105ca (image=quay.io/ceph/ceph:v18, name=youthful_tharp, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:19:51 compute-0 podman[78377]: 2025-11-24 18:19:51.783587034 +0000 UTC m=+0.148895239 container start f22006b2f1ed6e452bc338277cb6cb8c7f88b4bd0643164d7d922d3a22e105ca (image=quay.io/ceph/ceph:v18, name=youthful_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 18:19:51 compute-0 podman[78377]: 2025-11-24 18:19:51.788569112 +0000 UTC m=+0.153877357 container attach f22006b2f1ed6e452bc338277cb6cb8c7f88b4bd0643164d7d922d3a22e105ca (image=quay.io/ceph/ceph:v18, name=youthful_tharp, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 18:19:51 compute-0 systemd[1]: Started libpod-conmon-f1bcbd1729cdba1323091035af3bebcce8f8e51322c7cb7b64bee586f59d2e5d.scope.
Nov 24 18:19:51 compute-0 podman[78405]: 2025-11-24 18:19:51.734867331 +0000 UTC m=+0.020830526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:19:51 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:19:51 compute-0 podman[78405]: 2025-11-24 18:19:51.856800586 +0000 UTC m=+0.142763791 container init f1bcbd1729cdba1323091035af3bebcce8f8e51322c7cb7b64bee586f59d2e5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 24 18:19:51 compute-0 podman[78405]: 2025-11-24 18:19:51.862049601 +0000 UTC m=+0.148012776 container start f1bcbd1729cdba1323091035af3bebcce8f8e51322c7cb7b64bee586f59d2e5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_curie, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:19:51 compute-0 podman[78405]: 2025-11-24 18:19:51.865988942 +0000 UTC m=+0.151952167 container attach f1bcbd1729cdba1323091035af3bebcce8f8e51322c7cb7b64bee586f59d2e5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 18:19:51 compute-0 admiring_curie[78428]: 167 167
Nov 24 18:19:51 compute-0 systemd[1]: libpod-f1bcbd1729cdba1323091035af3bebcce8f8e51322c7cb7b64bee586f59d2e5d.scope: Deactivated successfully.
Nov 24 18:19:51 compute-0 podman[78405]: 2025-11-24 18:19:51.868006834 +0000 UTC m=+0.153970009 container died f1bcbd1729cdba1323091035af3bebcce8f8e51322c7cb7b64bee586f59d2e5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_curie, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 18:19:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-01cb7676017884178bff14a19dbfbc3c1124e25c872722f06e49fe4c3ca9491a-merged.mount: Deactivated successfully.
Nov 24 18:19:51 compute-0 podman[78405]: 2025-11-24 18:19:51.913533555 +0000 UTC m=+0.199496750 container remove f1bcbd1729cdba1323091035af3bebcce8f8e51322c7cb7b64bee586f59d2e5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_curie, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:19:51 compute-0 systemd[1]: libpod-conmon-f1bcbd1729cdba1323091035af3bebcce8f8e51322c7cb7b64bee586f59d2e5d.scope: Deactivated successfully.
Nov 24 18:19:52 compute-0 ceph-mon[74927]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:19:52 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:52 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Nov 24 18:19:52 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1867413685' entity='client.admin' 
Nov 24 18:19:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:19:52 compute-0 ceph-mgr[75218]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 18:19:52 compute-0 systemd[1]: libpod-f22006b2f1ed6e452bc338277cb6cb8c7f88b4bd0643164d7d922d3a22e105ca.scope: Deactivated successfully.
Nov 24 18:19:52 compute-0 podman[78377]: 2025-11-24 18:19:52.477006681 +0000 UTC m=+0.842314886 container died f22006b2f1ed6e452bc338277cb6cb8c7f88b4bd0643164d7d922d3a22e105ca (image=quay.io/ceph/ceph:v18, name=youthful_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 24 18:19:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee484dfdd6bbe71bb539a6240aced83d58e22072997bbda19c11884bd14baec1-merged.mount: Deactivated successfully.
Nov 24 18:19:52 compute-0 podman[78377]: 2025-11-24 18:19:52.527656343 +0000 UTC m=+0.892964558 container remove f22006b2f1ed6e452bc338277cb6cb8c7f88b4bd0643164d7d922d3a22e105ca (image=quay.io/ceph/ceph:v18, name=youthful_tharp, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:19:52 compute-0 systemd[1]: libpod-conmon-f22006b2f1ed6e452bc338277cb6cb8c7f88b4bd0643164d7d922d3a22e105ca.scope: Deactivated successfully.
Nov 24 18:19:52 compute-0 podman[78477]: 2025-11-24 18:19:52.623408745 +0000 UTC m=+0.071121390 container create 0e2531a51d1b08ef3228c7e94afcba267208da57b990da46e842cd7dca3f90e2 (image=quay.io/ceph/ceph:v18, name=great_kilby, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:19:52 compute-0 systemd[1]: Started libpod-conmon-0e2531a51d1b08ef3228c7e94afcba267208da57b990da46e842cd7dca3f90e2.scope.
Nov 24 18:19:52 compute-0 podman[78477]: 2025-11-24 18:19:52.594340278 +0000 UTC m=+0.042052983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:19:52 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae09d0eee0691ec9aa43b58051d76c60e793ac89c406d1031099fdd7c4b3c418/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae09d0eee0691ec9aa43b58051d76c60e793ac89c406d1031099fdd7c4b3c418/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae09d0eee0691ec9aa43b58051d76c60e793ac89c406d1031099fdd7c4b3c418/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:52 compute-0 podman[78477]: 2025-11-24 18:19:52.714796904 +0000 UTC m=+0.162509529 container init 0e2531a51d1b08ef3228c7e94afcba267208da57b990da46e842cd7dca3f90e2 (image=quay.io/ceph/ceph:v18, name=great_kilby, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:19:52 compute-0 podman[78477]: 2025-11-24 18:19:52.724200946 +0000 UTC m=+0.171913551 container start 0e2531a51d1b08ef3228c7e94afcba267208da57b990da46e842cd7dca3f90e2 (image=quay.io/ceph/ceph:v18, name=great_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:19:52 compute-0 podman[78477]: 2025-11-24 18:19:52.727973273 +0000 UTC m=+0.175685878 container attach 0e2531a51d1b08ef3228c7e94afcba267208da57b990da46e842cd7dca3f90e2 (image=quay.io/ceph/ceph:v18, name=great_kilby, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:19:53 compute-0 ceph-mon[74927]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:19:53 compute-0 ceph-mon[74927]: Added label _admin to host compute-0
Nov 24 18:19:53 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1867413685' entity='client.admin' 
Nov 24 18:19:53 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Nov 24 18:19:53 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2903664547' entity='client.admin' 
Nov 24 18:19:53 compute-0 great_kilby[78493]: set mgr/dashboard/cluster/status
Nov 24 18:19:53 compute-0 systemd[1]: libpod-0e2531a51d1b08ef3228c7e94afcba267208da57b990da46e842cd7dca3f90e2.scope: Deactivated successfully.
Nov 24 18:19:53 compute-0 podman[78477]: 2025-11-24 18:19:53.388915335 +0000 UTC m=+0.836627940 container died 0e2531a51d1b08ef3228c7e94afcba267208da57b990da46e842cd7dca3f90e2 (image=quay.io/ceph/ceph:v18, name=great_kilby, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:19:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae09d0eee0691ec9aa43b58051d76c60e793ac89c406d1031099fdd7c4b3c418-merged.mount: Deactivated successfully.
Nov 24 18:19:53 compute-0 podman[78477]: 2025-11-24 18:19:53.425642819 +0000 UTC m=+0.873355424 container remove 0e2531a51d1b08ef3228c7e94afcba267208da57b990da46e842cd7dca3f90e2 (image=quay.io/ceph/ceph:v18, name=great_kilby, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:19:53 compute-0 systemd[1]: libpod-conmon-0e2531a51d1b08ef3228c7e94afcba267208da57b990da46e842cd7dca3f90e2.scope: Deactivated successfully.
Nov 24 18:19:53 compute-0 sudo[73918]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:53 compute-0 podman[78540]: 2025-11-24 18:19:53.55636766 +0000 UTC m=+0.035214896 container create 83223adfc10d943e67f60e282e6d29afc3442af328186165844bf5b0142580d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 18:19:53 compute-0 systemd[1]: Started libpod-conmon-83223adfc10d943e67f60e282e6d29afc3442af328186165844bf5b0142580d0.scope.
Nov 24 18:19:53 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:19:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7549dae49b8e0587ea340b297ba987b00943c40a7b201c2aa5730103053632d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7549dae49b8e0587ea340b297ba987b00943c40a7b201c2aa5730103053632d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7549dae49b8e0587ea340b297ba987b00943c40a7b201c2aa5730103053632d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7549dae49b8e0587ea340b297ba987b00943c40a7b201c2aa5730103053632d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:53 compute-0 podman[78540]: 2025-11-24 18:19:53.540235766 +0000 UTC m=+0.019083052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:19:53 compute-0 podman[78540]: 2025-11-24 18:19:53.642588317 +0000 UTC m=+0.121435543 container init 83223adfc10d943e67f60e282e6d29afc3442af328186165844bf5b0142580d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chatelet, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:19:53 compute-0 podman[78540]: 2025-11-24 18:19:53.648366435 +0000 UTC m=+0.127213661 container start 83223adfc10d943e67f60e282e6d29afc3442af328186165844bf5b0142580d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chatelet, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:19:53 compute-0 podman[78540]: 2025-11-24 18:19:53.651098596 +0000 UTC m=+0.129945812 container attach 83223adfc10d943e67f60e282e6d29afc3442af328186165844bf5b0142580d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chatelet, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 18:19:53 compute-0 sudo[78584]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewkmsprazxyfqjbcefanrjhhzxmnlrfa ; /usr/bin/python3'
Nov 24 18:19:53 compute-0 sudo[78584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:19:53 compute-0 python3[78586]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:19:53 compute-0 podman[78587]: 2025-11-24 18:19:53.944491679 +0000 UTC m=+0.037168127 container create a08f139dfee75b0aedd6a9d48bafa7d89e032d4aa6101c09934607b15e1ac5b9 (image=quay.io/ceph/ceph:v18, name=festive_meitner, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 18:19:53 compute-0 systemd[1]: Started libpod-conmon-a08f139dfee75b0aedd6a9d48bafa7d89e032d4aa6101c09934607b15e1ac5b9.scope.
Nov 24 18:19:53 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:19:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/424b445c652075fc2e8f4f194858af6784d94dc29906fc5d36f9f4533acb11f2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/424b445c652075fc2e8f4f194858af6784d94dc29906fc5d36f9f4533acb11f2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:54 compute-0 podman[78587]: 2025-11-24 18:19:54.019055135 +0000 UTC m=+0.111731583 container init a08f139dfee75b0aedd6a9d48bafa7d89e032d4aa6101c09934607b15e1ac5b9 (image=quay.io/ceph/ceph:v18, name=festive_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 18:19:54 compute-0 podman[78587]: 2025-11-24 18:19:53.927010639 +0000 UTC m=+0.019687087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:19:54 compute-0 podman[78587]: 2025-11-24 18:19:54.028815436 +0000 UTC m=+0.121491884 container start a08f139dfee75b0aedd6a9d48bafa7d89e032d4aa6101c09934607b15e1ac5b9 (image=quay.io/ceph/ceph:v18, name=festive_meitner, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 18:19:54 compute-0 podman[78587]: 2025-11-24 18:19:54.032724117 +0000 UTC m=+0.125400585 container attach a08f139dfee75b0aedd6a9d48bafa7d89e032d4aa6101c09934607b15e1ac5b9 (image=quay.io/ceph/ceph:v18, name=festive_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:19:54 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2903664547' entity='client.admin' 
Nov 24 18:19:54 compute-0 ceph-mgr[75218]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Nov 24 18:19:54 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:19:54 compute-0 ceph-mon[74927]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 24 18:19:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Nov 24 18:19:54 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2240054925' entity='client.admin' 
Nov 24 18:19:54 compute-0 podman[78587]: 2025-11-24 18:19:54.576188669 +0000 UTC m=+0.668865117 container died a08f139dfee75b0aedd6a9d48bafa7d89e032d4aa6101c09934607b15e1ac5b9 (image=quay.io/ceph/ceph:v18, name=festive_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 18:19:54 compute-0 systemd[1]: libpod-a08f139dfee75b0aedd6a9d48bafa7d89e032d4aa6101c09934607b15e1ac5b9.scope: Deactivated successfully.
Nov 24 18:19:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-424b445c652075fc2e8f4f194858af6784d94dc29906fc5d36f9f4533acb11f2-merged.mount: Deactivated successfully.
Nov 24 18:19:54 compute-0 podman[78587]: 2025-11-24 18:19:54.61940755 +0000 UTC m=+0.712083988 container remove a08f139dfee75b0aedd6a9d48bafa7d89e032d4aa6101c09934607b15e1ac5b9 (image=quay.io/ceph/ceph:v18, name=festive_meitner, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:19:54 compute-0 systemd[1]: libpod-conmon-a08f139dfee75b0aedd6a9d48bafa7d89e032d4aa6101c09934607b15e1ac5b9.scope: Deactivated successfully.
Nov 24 18:19:54 compute-0 sudo[78584]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]: [
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:     {
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:         "available": false,
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:         "ceph_device": false,
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:         "lsm_data": {},
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:         "lvs": [],
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:         "path": "/dev/sr0",
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:         "rejected_reasons": [
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:             "Insufficient space (<5GB)",
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:             "Has a FileSystem"
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:         ],
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:         "sys_api": {
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:             "actuators": null,
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:             "device_nodes": "sr0",
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:             "devname": "sr0",
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:             "human_readable_size": "482.00 KB",
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:             "id_bus": "ata",
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:             "model": "QEMU DVD-ROM",
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:             "nr_requests": "2",
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:             "parent": "/dev/sr0",
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:             "partitions": {},
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:             "path": "/dev/sr0",
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:             "removable": "1",
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:             "rev": "2.5+",
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:             "ro": "0",
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:             "rotational": "1",
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:             "sas_address": "",
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:             "sas_device_handle": "",
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:             "scheduler_mode": "mq-deadline",
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:             "sectors": 0,
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:             "sectorsize": "2048",
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:             "size": 493568.0,
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:             "support_discard": "2048",
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:             "type": "disk",
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:             "vendor": "QEMU"
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:         }
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]:     }
Nov 24 18:19:54 compute-0 elegant_chatelet[78556]: ]
Nov 24 18:19:54 compute-0 podman[78540]: 2025-11-24 18:19:54.987149614 +0000 UTC m=+1.465996840 container died 83223adfc10d943e67f60e282e6d29afc3442af328186165844bf5b0142580d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:19:54 compute-0 systemd[1]: libpod-83223adfc10d943e67f60e282e6d29afc3442af328186165844bf5b0142580d0.scope: Deactivated successfully.
Nov 24 18:19:54 compute-0 systemd[1]: libpod-83223adfc10d943e67f60e282e6d29afc3442af328186165844bf5b0142580d0.scope: Consumed 1.347s CPU time.
Nov 24 18:19:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7549dae49b8e0587ea340b297ba987b00943c40a7b201c2aa5730103053632d-merged.mount: Deactivated successfully.
Nov 24 18:19:55 compute-0 podman[78540]: 2025-11-24 18:19:55.043913684 +0000 UTC m=+1.522760910 container remove 83223adfc10d943e67f60e282e6d29afc3442af328186165844bf5b0142580d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:19:55 compute-0 systemd[1]: libpod-conmon-83223adfc10d943e67f60e282e6d29afc3442af328186165844bf5b0142580d0.scope: Deactivated successfully.
Nov 24 18:19:55 compute-0 sudo[78292]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:55 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:19:55 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:55 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:19:55 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:55 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:19:55 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:55 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:19:55 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:55 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 24 18:19:55 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 18:19:55 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:19:55 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:19:55 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:19:55 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:19:55 compute-0 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 24 18:19:55 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 24 18:19:55 compute-0 sudo[80272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:55 compute-0 sudo[80272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:55 compute-0 sudo[80272]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:55 compute-0 sudo[80297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 24 18:19:55 compute-0 sudo[80297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:55 compute-0 sudo[80297]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:55 compute-0 ceph-mon[74927]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:19:55 compute-0 ceph-mon[74927]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 24 18:19:55 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2240054925' entity='client.admin' 
Nov 24 18:19:55 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:55 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:55 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:55 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:19:55 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 18:19:55 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:19:55 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:19:55 compute-0 sudo[80343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:55 compute-0 sudo[80343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:55 compute-0 sudo[80343]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:55 compute-0 sudo[80394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d/etc/ceph
Nov 24 18:19:55 compute-0 sudo[80394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:55 compute-0 sudo[80394]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:55 compute-0 sudo[80442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzevuiupqeiyydwnujmlczamqjrkhequ ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764008394.9836938-36683-29946511160405/async_wrapper.py j620571266268 30 /home/zuul/.ansible/tmp/ansible-tmp-1764008394.9836938-36683-29946511160405/AnsiballZ_command.py _'
Nov 24 18:19:55 compute-0 sudo[80442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:19:55 compute-0 sudo[80447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:55 compute-0 sudo[80447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:55 compute-0 sudo[80447]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:55 compute-0 ansible-async_wrapper.py[80446]: Invoked with j620571266268 30 /home/zuul/.ansible/tmp/ansible-tmp-1764008394.9836938-36683-29946511160405/AnsiballZ_command.py _
Nov 24 18:19:55 compute-0 ansible-async_wrapper.py[80498]: Starting module and watcher
Nov 24 18:19:55 compute-0 sudo[80472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d/etc/ceph/ceph.conf.new
Nov 24 18:19:55 compute-0 ansible-async_wrapper.py[80498]: Start watching 80499 (30)
Nov 24 18:19:55 compute-0 ansible-async_wrapper.py[80499]: Start module (80499)
Nov 24 18:19:55 compute-0 sudo[80472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:55 compute-0 ansible-async_wrapper.py[80446]: Return async_wrapper task started.
Nov 24 18:19:55 compute-0 sudo[80472]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:55 compute-0 sudo[80442]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:55 compute-0 sudo[80502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:55 compute-0 sudo[80502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:55 compute-0 sudo[80502]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:55 compute-0 sudo[80527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 18:19:55 compute-0 sudo[80527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:55 compute-0 sudo[80527]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:55 compute-0 python3[80501]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:19:55 compute-0 sudo[80552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:55 compute-0 sudo[80552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:55 compute-0 sudo[80552]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:55 compute-0 podman[80575]: 2025-11-24 18:19:55.993030553 +0000 UTC m=+0.054630015 container create 96017e097dea415a900c63b9111d9dfd0bf659062706ce9e00b4fda9df3c45f2 (image=quay.io/ceph/ceph:v18, name=recursing_dhawan, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:19:56 compute-0 systemd[1]: Started libpod-conmon-96017e097dea415a900c63b9111d9dfd0bf659062706ce9e00b4fda9df3c45f2.scope.
Nov 24 18:19:56 compute-0 sudo[80586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d/etc/ceph/ceph.conf.new
Nov 24 18:19:56 compute-0 sudo[80586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:56 compute-0 sudo[80586]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:56 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64f114bfae53f61884629050bc58776c095c866d2ef2c531d72dd12e643fbd86/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64f114bfae53f61884629050bc58776c095c866d2ef2c531d72dd12e643fbd86/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:56 compute-0 podman[80575]: 2025-11-24 18:19:55.971242503 +0000 UTC m=+0.032841975 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:19:56 compute-0 podman[80575]: 2025-11-24 18:19:56.082789781 +0000 UTC m=+0.144389343 container init 96017e097dea415a900c63b9111d9dfd0bf659062706ce9e00b4fda9df3c45f2 (image=quay.io/ceph/ceph:v18, name=recursing_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:19:56 compute-0 podman[80575]: 2025-11-24 18:19:56.092596783 +0000 UTC m=+0.154196275 container start 96017e097dea415a900c63b9111d9dfd0bf659062706ce9e00b4fda9df3c45f2 (image=quay.io/ceph/ceph:v18, name=recursing_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:19:56 compute-0 podman[80575]: 2025-11-24 18:19:56.096259207 +0000 UTC m=+0.157858679 container attach 96017e097dea415a900c63b9111d9dfd0bf659062706ce9e00b4fda9df3c45f2 (image=quay.io/ceph/ceph:v18, name=recursing_dhawan, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 18:19:56 compute-0 sudo[80644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:56 compute-0 sudo[80644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:56 compute-0 sudo[80644]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:56 compute-0 sudo[80669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d/etc/ceph/ceph.conf.new
Nov 24 18:19:56 compute-0 sudo[80669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:56 compute-0 sudo[80669]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:56 compute-0 sudo[80694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:56 compute-0 sudo[80694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:56 compute-0 sudo[80694]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:56 compute-0 ceph-mon[74927]: Updating compute-0:/etc/ceph/ceph.conf
Nov 24 18:19:56 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:19:56 compute-0 sudo[80719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d/etc/ceph/ceph.conf.new
Nov 24 18:19:56 compute-0 sudo[80719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:56 compute-0 sudo[80719]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:56 compute-0 sudo[80763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:56 compute-0 sudo[80763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:56 compute-0 sudo[80763]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:56 compute-0 sudo[80788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 24 18:19:56 compute-0 sudo[80788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:56 compute-0 sudo[80788]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:56 compute-0 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/config/ceph.conf
Nov 24 18:19:56 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/config/ceph.conf
Nov 24 18:19:56 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 18:19:56 compute-0 recursing_dhawan[80616]: 
Nov 24 18:19:56 compute-0 recursing_dhawan[80616]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 24 18:19:56 compute-0 systemd[1]: libpod-96017e097dea415a900c63b9111d9dfd0bf659062706ce9e00b4fda9df3c45f2.scope: Deactivated successfully.
Nov 24 18:19:56 compute-0 podman[80575]: 2025-11-24 18:19:56.704368911 +0000 UTC m=+0.765968423 container died 96017e097dea415a900c63b9111d9dfd0bf659062706ce9e00b4fda9df3c45f2 (image=quay.io/ceph/ceph:v18, name=recursing_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:19:56 compute-0 sudo[80813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:56 compute-0 sudo[80813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:56 compute-0 sudo[80813]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-64f114bfae53f61884629050bc58776c095c866d2ef2c531d72dd12e643fbd86-merged.mount: Deactivated successfully.
Nov 24 18:19:56 compute-0 podman[80575]: 2025-11-24 18:19:56.754846629 +0000 UTC m=+0.816446101 container remove 96017e097dea415a900c63b9111d9dfd0bf659062706ce9e00b4fda9df3c45f2 (image=quay.io/ceph/ceph:v18, name=recursing_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:19:56 compute-0 systemd[1]: libpod-conmon-96017e097dea415a900c63b9111d9dfd0bf659062706ce9e00b4fda9df3c45f2.scope: Deactivated successfully.
Nov 24 18:19:56 compute-0 ansible-async_wrapper.py[80499]: Module complete (80499)
Nov 24 18:19:56 compute-0 sudo[80851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/config
Nov 24 18:19:56 compute-0 sudo[80851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:56 compute-0 sudo[80851]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:56 compute-0 sudo[80899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:56 compute-0 sudo[80899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:56 compute-0 sudo[80899]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:56 compute-0 sudo[80924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d/var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/config
Nov 24 18:19:56 compute-0 sudo[80924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:56 compute-0 sudo[80924]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:56 compute-0 sudo[80949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:56 compute-0 sudo[80949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:56 compute-0 sudo[80949]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:57 compute-0 sudo[80998]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmdzovujnvsgowkihrrqjuhsausiwinf ; /usr/bin/python3'
Nov 24 18:19:57 compute-0 sudo[80998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:19:57 compute-0 sudo[80997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d/var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/config/ceph.conf.new
Nov 24 18:19:57 compute-0 sudo[80997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:57 compute-0 sudo[80997]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:57 compute-0 python3[81011]: ansible-ansible.legacy.async_status Invoked with jid=j620571266268.80446 mode=status _async_dir=/root/.ansible_async
Nov 24 18:19:57 compute-0 sudo[81025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:57 compute-0 sudo[80998]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:57 compute-0 sudo[81025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:57 compute-0 sudo[81025]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:57 compute-0 sudo[81050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 18:19:57 compute-0 sudo[81050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:57 compute-0 sudo[81050]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:57 compute-0 sudo[81141]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umxwhhkzhepzdorxnyqfmgnjhqtsakvl ; /usr/bin/python3'
Nov 24 18:19:57 compute-0 sudo[81141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:19:57 compute-0 sudo[81102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:57 compute-0 sudo[81102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:57 compute-0 sudo[81102]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:57 compute-0 sudo[81149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d/var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/config/ceph.conf.new
Nov 24 18:19:57 compute-0 sudo[81149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:57 compute-0 sudo[81149]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:57 compute-0 ceph-mon[74927]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:19:57 compute-0 ceph-mon[74927]: Updating compute-0:/var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/config/ceph.conf
Nov 24 18:19:57 compute-0 ceph-mon[74927]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 18:19:57 compute-0 python3[81147]: ansible-ansible.legacy.async_status Invoked with jid=j620571266268.80446 mode=cleanup _async_dir=/root/.ansible_async
Nov 24 18:19:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:19:57 compute-0 sudo[81141]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:57 compute-0 sudo[81197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:57 compute-0 sudo[81197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:57 compute-0 sudo[81197]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:57 compute-0 sudo[81222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d/var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/config/ceph.conf.new
Nov 24 18:19:57 compute-0 sudo[81222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:57 compute-0 sudo[81222]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:57 compute-0 sudo[81247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:57 compute-0 sudo[81247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:57 compute-0 sudo[81247]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:57 compute-0 sudo[81272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d/var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/config/ceph.conf.new
Nov 24 18:19:57 compute-0 sudo[81272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:57 compute-0 sudo[81272]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:57 compute-0 sudo[81340]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aogpsvzpfutfvadprawtfqvwpsbwguny ; /usr/bin/python3'
Nov 24 18:19:57 compute-0 sudo[81340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:19:57 compute-0 sudo[81302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:57 compute-0 sudo[81302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:57 compute-0 sudo[81302]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:57 compute-0 sudo[81348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d/var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/config/ceph.conf.new /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/config/ceph.conf
Nov 24 18:19:57 compute-0 sudo[81348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:57 compute-0 sudo[81348]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:57 compute-0 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 24 18:19:57 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 24 18:19:57 compute-0 python3[81345]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 18:19:57 compute-0 sudo[81340]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:57 compute-0 sudo[81373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:57 compute-0 sudo[81373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:57 compute-0 sudo[81373]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:57 compute-0 sudo[81400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 24 18:19:57 compute-0 sudo[81400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:58 compute-0 sudo[81400]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:58 compute-0 sudo[81425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:58 compute-0 sudo[81425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:58 compute-0 sudo[81425]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:58 compute-0 sudo[81450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d/etc/ceph
Nov 24 18:19:58 compute-0 sudo[81450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:58 compute-0 sudo[81450]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:58 compute-0 sudo[81519]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgpmxibzafxdevdcvnkbtuuszymzkxkb ; /usr/bin/python3'
Nov 24 18:19:58 compute-0 sudo[81519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:19:58 compute-0 sudo[81478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:58 compute-0 sudo[81478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:58 compute-0 sudo[81478]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:58 compute-0 sudo[81526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d/etc/ceph/ceph.client.admin.keyring.new
Nov 24 18:19:58 compute-0 sudo[81526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:58 compute-0 sudo[81526]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:58 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:19:59 compute-0 ceph-mon[74927]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 24 18:19:59 compute-0 sudo[81551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:59 compute-0 sudo[81551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:59 compute-0 sudo[81551]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:59 compute-0 python3[81523]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:19:59 compute-0 sudo[81576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 18:19:59 compute-0 sudo[81576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:59 compute-0 sudo[81576]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:59 compute-0 podman[81600]: 2025-11-24 18:19:59.20312492 +0000 UTC m=+0.039858605 container create 86e29a39f4d8f3daa52ecc9db10ab0c5991b7de3ba8717962b07b5c6649fc2fa (image=quay.io/ceph/ceph:v18, name=eloquent_hofstadter, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:19:59 compute-0 sudo[81606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:59 compute-0 sudo[81606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:59 compute-0 sudo[81606]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:59 compute-0 systemd[1]: Started libpod-conmon-86e29a39f4d8f3daa52ecc9db10ab0c5991b7de3ba8717962b07b5c6649fc2fa.scope.
Nov 24 18:19:59 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:19:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/404b13da7f582898f1551392d420569ac4d41dd47626b522da05a27a1baf3a0d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/404b13da7f582898f1551392d420569ac4d41dd47626b522da05a27a1baf3a0d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/404b13da7f582898f1551392d420569ac4d41dd47626b522da05a27a1baf3a0d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:19:59 compute-0 sudo[81641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d/etc/ceph/ceph.client.admin.keyring.new
Nov 24 18:19:59 compute-0 podman[81600]: 2025-11-24 18:19:59.186948035 +0000 UTC m=+0.023681750 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:19:59 compute-0 sudo[81641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:59 compute-0 sudo[81641]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:59 compute-0 podman[81600]: 2025-11-24 18:19:59.368959224 +0000 UTC m=+0.205692959 container init 86e29a39f4d8f3daa52ecc9db10ab0c5991b7de3ba8717962b07b5c6649fc2fa (image=quay.io/ceph/ceph:v18, name=eloquent_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 18:19:59 compute-0 podman[81600]: 2025-11-24 18:19:59.374292031 +0000 UTC m=+0.211025726 container start 86e29a39f4d8f3daa52ecc9db10ab0c5991b7de3ba8717962b07b5c6649fc2fa (image=quay.io/ceph/ceph:v18, name=eloquent_hofstadter, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 18:19:59 compute-0 sudo[81693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:59 compute-0 sudo[81693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:59 compute-0 podman[81600]: 2025-11-24 18:19:59.378678414 +0000 UTC m=+0.215412109 container attach 86e29a39f4d8f3daa52ecc9db10ab0c5991b7de3ba8717962b07b5c6649fc2fa (image=quay.io/ceph/ceph:v18, name=eloquent_hofstadter, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:19:59 compute-0 sudo[81693]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:59 compute-0 sudo[81719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d/etc/ceph/ceph.client.admin.keyring.new
Nov 24 18:19:59 compute-0 sudo[81719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:59 compute-0 sudo[81719]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:59 compute-0 sudo[81744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:59 compute-0 sudo[81744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:59 compute-0 sudo[81744]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:59 compute-0 sudo[81769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d/etc/ceph/ceph.client.admin.keyring.new
Nov 24 18:19:59 compute-0 sudo[81769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:59 compute-0 sudo[81769]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:59 compute-0 sudo[81794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:59 compute-0 sudo[81794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:59 compute-0 sudo[81794]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:59 compute-0 sudo[81819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 24 18:19:59 compute-0 sudo[81819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:59 compute-0 sudo[81819]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:59 compute-0 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/config/ceph.client.admin.keyring
Nov 24 18:19:59 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/config/ceph.client.admin.keyring
Nov 24 18:19:59 compute-0 sudo[81863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:59 compute-0 sudo[81863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:59 compute-0 sudo[81863]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:59 compute-0 sudo[81888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/config
Nov 24 18:19:59 compute-0 sudo[81888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:59 compute-0 sudo[81888]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:59 compute-0 sudo[81913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:59 compute-0 sudo[81913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:59 compute-0 sudo[81913]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:59 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 18:19:59 compute-0 eloquent_hofstadter[81650]: 
Nov 24 18:19:59 compute-0 eloquent_hofstadter[81650]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 24 18:19:59 compute-0 systemd[1]: libpod-86e29a39f4d8f3daa52ecc9db10ab0c5991b7de3ba8717962b07b5c6649fc2fa.scope: Deactivated successfully.
Nov 24 18:19:59 compute-0 podman[81600]: 2025-11-24 18:19:59.912793645 +0000 UTC m=+0.749527330 container died 86e29a39f4d8f3daa52ecc9db10ab0c5991b7de3ba8717962b07b5c6649fc2fa (image=quay.io/ceph/ceph:v18, name=eloquent_hofstadter, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 18:19:59 compute-0 sudo[81938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d/var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/config
Nov 24 18:19:59 compute-0 sudo[81938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:59 compute-0 sudo[81938]: pam_unix(sudo:session): session closed for user root
Nov 24 18:19:59 compute-0 sudo[81976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:19:59 compute-0 sudo[81976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:19:59 compute-0 sudo[81976]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:00 compute-0 sudo[82002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d/var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/config/ceph.client.admin.keyring.new
Nov 24 18:20:00 compute-0 sudo[82002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:00 compute-0 sudo[82002]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-404b13da7f582898f1551392d420569ac4d41dd47626b522da05a27a1baf3a0d-merged.mount: Deactivated successfully.
Nov 24 18:20:00 compute-0 ceph-mon[74927]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:00 compute-0 podman[81600]: 2025-11-24 18:20:00.084406877 +0000 UTC m=+0.921140582 container remove 86e29a39f4d8f3daa52ecc9db10ab0c5991b7de3ba8717962b07b5c6649fc2fa (image=quay.io/ceph/ceph:v18, name=eloquent_hofstadter, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:20:00 compute-0 systemd[1]: libpod-conmon-86e29a39f4d8f3daa52ecc9db10ab0c5991b7de3ba8717962b07b5c6649fc2fa.scope: Deactivated successfully.
Nov 24 18:20:00 compute-0 sudo[82027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:00 compute-0 sudo[82027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:00 compute-0 sudo[82027]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:00 compute-0 sudo[81519]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:00 compute-0 sudo[82052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 18:20:00 compute-0 sudo[82052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:00 compute-0 sudo[82052]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:00 compute-0 sudo[82077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:00 compute-0 sudo[82077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:00 compute-0 sudo[82077]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:00 compute-0 sudo[82102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d/var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/config/ceph.client.admin.keyring.new
Nov 24 18:20:00 compute-0 sudo[82102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:00 compute-0 sudo[82102]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:00 compute-0 sudo[82179]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbsbpcpuvcsduxcvqnnnnygahvtyovhm ; /usr/bin/python3'
Nov 24 18:20:00 compute-0 sudo[82179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:20:00 compute-0 sudo[82170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:00 compute-0 sudo[82170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:00 compute-0 sudo[82170]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:00 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:00 compute-0 sudo[82201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d/var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/config/ceph.client.admin.keyring.new
Nov 24 18:20:00 compute-0 sudo[82201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:00 compute-0 sudo[82201]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:00 compute-0 python3[82196]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:20:00 compute-0 sudo[82226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:00 compute-0 sudo[82226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:00 compute-0 sudo[82226]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:00 compute-0 podman[82244]: 2025-11-24 18:20:00.581102267 +0000 UTC m=+0.055092548 container create 3e7e41fe54c5b1c7599cad53f7e8788c97b5ed1b66595f7ecc3e2850dce8f889 (image=quay.io/ceph/ceph:v18, name=interesting_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 18:20:00 compute-0 sudo[82264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d/var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/config/ceph.client.admin.keyring.new
Nov 24 18:20:00 compute-0 sudo[82264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:00 compute-0 sudo[82264]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:00 compute-0 systemd[1]: Started libpod-conmon-3e7e41fe54c5b1c7599cad53f7e8788c97b5ed1b66595f7ecc3e2850dce8f889.scope.
Nov 24 18:20:00 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8154eaf8899a373e75c21ccd2cf89316033ae49ed89b25271ba54bb81b6aacc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8154eaf8899a373e75c21ccd2cf89316033ae49ed89b25271ba54bb81b6aacc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8154eaf8899a373e75c21ccd2cf89316033ae49ed89b25271ba54bb81b6aacc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:00 compute-0 podman[82244]: 2025-11-24 18:20:00.552690996 +0000 UTC m=+0.026681287 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:20:00 compute-0 sudo[82291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:00 compute-0 sudo[82291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:00 compute-0 sudo[82291]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:00 compute-0 podman[82244]: 2025-11-24 18:20:00.673052731 +0000 UTC m=+0.147043002 container init 3e7e41fe54c5b1c7599cad53f7e8788c97b5ed1b66595f7ecc3e2850dce8f889 (image=quay.io/ceph/ceph:v18, name=interesting_buck, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:20:00 compute-0 podman[82244]: 2025-11-24 18:20:00.678906691 +0000 UTC m=+0.152896952 container start 3e7e41fe54c5b1c7599cad53f7e8788c97b5ed1b66595f7ecc3e2850dce8f889 (image=quay.io/ceph/ceph:v18, name=interesting_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:20:00 compute-0 podman[82244]: 2025-11-24 18:20:00.702148029 +0000 UTC m=+0.176138300 container attach 3e7e41fe54c5b1c7599cad53f7e8788c97b5ed1b66595f7ecc3e2850dce8f889 (image=quay.io/ceph/ceph:v18, name=interesting_buck, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 18:20:00 compute-0 ansible-async_wrapper.py[80498]: Done in kid B.
Nov 24 18:20:00 compute-0 sudo[82319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e5ee928f-099b-569b-93c9-ecf025cbb50d/var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/config/ceph.client.admin.keyring.new /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/config/ceph.client.admin.keyring
Nov 24 18:20:00 compute-0 sudo[82319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:00 compute-0 sudo[82319]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:00 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:20:00 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:00 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:20:00 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:00 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:20:00 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:00 compute-0 ceph-mgr[75218]: [progress INFO root] update: starting ev fdda5bf1-6e1e-476a-b44b-c7c92d3cdd82 (Updating crash deployment (+1 -> 1))
Nov 24 18:20:00 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 24 18:20:00 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 18:20:00 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 24 18:20:00 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:20:00 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:20:00 compute-0 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Nov 24 18:20:00 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Nov 24 18:20:00 compute-0 sudo[82345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:00 compute-0 sudo[82345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:00 compute-0 sudo[82345]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:00 compute-0 sudo[82370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:20:00 compute-0 sudo[82370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:00 compute-0 sudo[82370]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:00 compute-0 sudo[82395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:00 compute-0 sudo[82395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:00 compute-0 sudo[82395]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:00 compute-0 sudo[82420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 18:20:00 compute-0 sudo[82420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:01 compute-0 ceph-mon[74927]: Updating compute-0:/var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/config/ceph.client.admin.keyring
Nov 24 18:20:01 compute-0 ceph-mon[74927]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 18:20:01 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:01 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:01 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:01 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 18:20:01 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 24 18:20:01 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:20:01 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Nov 24 18:20:01 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3033777510' entity='client.admin' 
Nov 24 18:20:01 compute-0 systemd[1]: libpod-3e7e41fe54c5b1c7599cad53f7e8788c97b5ed1b66595f7ecc3e2850dce8f889.scope: Deactivated successfully.
Nov 24 18:20:01 compute-0 podman[82244]: 2025-11-24 18:20:01.273305513 +0000 UTC m=+0.747295774 container died 3e7e41fe54c5b1c7599cad53f7e8788c97b5ed1b66595f7ecc3e2850dce8f889 (image=quay.io/ceph/ceph:v18, name=interesting_buck, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:20:01 compute-0 podman[82506]: 2025-11-24 18:20:01.300986544 +0000 UTC m=+0.055333753 container create 16b247e5bd012cb1dca1288c039f098d5a525c4f155724ff499bc822fb5e10e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 18:20:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8154eaf8899a373e75c21ccd2cf89316033ae49ed89b25271ba54bb81b6aacc-merged.mount: Deactivated successfully.
Nov 24 18:20:01 compute-0 podman[82244]: 2025-11-24 18:20:01.357937328 +0000 UTC m=+0.831927589 container remove 3e7e41fe54c5b1c7599cad53f7e8788c97b5ed1b66595f7ecc3e2850dce8f889 (image=quay.io/ceph/ceph:v18, name=interesting_buck, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 18:20:01 compute-0 systemd[1]: libpod-conmon-3e7e41fe54c5b1c7599cad53f7e8788c97b5ed1b66595f7ecc3e2850dce8f889.scope: Deactivated successfully.
Nov 24 18:20:01 compute-0 podman[82506]: 2025-11-24 18:20:01.266276022 +0000 UTC m=+0.020623271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:20:01 compute-0 sudo[82179]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:01 compute-0 systemd[1]: Started libpod-conmon-16b247e5bd012cb1dca1288c039f098d5a525c4f155724ff499bc822fb5e10e7.scope.
Nov 24 18:20:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:01 compute-0 sudo[82558]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpadtnzgfeqadxkwblnowutrqmmlvcrr ; /usr/bin/python3'
Nov 24 18:20:01 compute-0 sudo[82558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:20:01 compute-0 podman[82506]: 2025-11-24 18:20:01.527323813 +0000 UTC m=+0.281671062 container init 16b247e5bd012cb1dca1288c039f098d5a525c4f155724ff499bc822fb5e10e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chaplygin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 18:20:01 compute-0 podman[82506]: 2025-11-24 18:20:01.533338208 +0000 UTC m=+0.287685417 container start 16b247e5bd012cb1dca1288c039f098d5a525c4f155724ff499bc822fb5e10e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 18:20:01 compute-0 jovial_chaplygin[82533]: 167 167
Nov 24 18:20:01 compute-0 systemd[1]: libpod-16b247e5bd012cb1dca1288c039f098d5a525c4f155724ff499bc822fb5e10e7.scope: Deactivated successfully.
Nov 24 18:20:01 compute-0 podman[82506]: 2025-11-24 18:20:01.538718706 +0000 UTC m=+0.293065915 container attach 16b247e5bd012cb1dca1288c039f098d5a525c4f155724ff499bc822fb5e10e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chaplygin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 18:20:01 compute-0 podman[82506]: 2025-11-24 18:20:01.540206714 +0000 UTC m=+0.294553923 container died 16b247e5bd012cb1dca1288c039f098d5a525c4f155724ff499bc822fb5e10e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 18:20:01 compute-0 python3[82560]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:20:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-902ef1fb5c1db6898db6e8bff6cf304bd6b07d1ba1e5354c20b7736401be0395-merged.mount: Deactivated successfully.
Nov 24 18:20:01 compute-0 podman[82506]: 2025-11-24 18:20:01.689309008 +0000 UTC m=+0.443656227 container remove 16b247e5bd012cb1dca1288c039f098d5a525c4f155724ff499bc822fb5e10e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chaplygin, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:20:01 compute-0 systemd[1]: libpod-conmon-16b247e5bd012cb1dca1288c039f098d5a525c4f155724ff499bc822fb5e10e7.scope: Deactivated successfully.
Nov 24 18:20:01 compute-0 podman[82576]: 2025-11-24 18:20:01.775072222 +0000 UTC m=+0.116489155 container create 0b65e2ff6f159bec9c3f0f2ced076d1d8a8a14a8bc7eb4db95e87622d984ff35 (image=quay.io/ceph/ceph:v18, name=peaceful_swanson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 18:20:01 compute-0 podman[82576]: 2025-11-24 18:20:01.698132644 +0000 UTC m=+0.039549617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:20:01 compute-0 systemd[1]: Started libpod-conmon-0b65e2ff6f159bec9c3f0f2ced076d1d8a8a14a8bc7eb4db95e87622d984ff35.scope.
Nov 24 18:20:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d6bdaddf4e7eef32e1d93aea15ffce6bd3df2e20cd2c5f7f545f7ac0fa23a85/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d6bdaddf4e7eef32e1d93aea15ffce6bd3df2e20cd2c5f7f545f7ac0fa23a85/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d6bdaddf4e7eef32e1d93aea15ffce6bd3df2e20cd2c5f7f545f7ac0fa23a85/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:01 compute-0 systemd[1]: Reloading.
Nov 24 18:20:01 compute-0 podman[82576]: 2025-11-24 18:20:01.902752205 +0000 UTC m=+0.244169188 container init 0b65e2ff6f159bec9c3f0f2ced076d1d8a8a14a8bc7eb4db95e87622d984ff35 (image=quay.io/ceph/ceph:v18, name=peaceful_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 24 18:20:01 compute-0 podman[82576]: 2025-11-24 18:20:01.912019243 +0000 UTC m=+0.253436176 container start 0b65e2ff6f159bec9c3f0f2ced076d1d8a8a14a8bc7eb4db95e87622d984ff35 (image=quay.io/ceph/ceph:v18, name=peaceful_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 24 18:20:01 compute-0 podman[82576]: 2025-11-24 18:20:01.915856972 +0000 UTC m=+0.257273945 container attach 0b65e2ff6f159bec9c3f0f2ced076d1d8a8a14a8bc7eb4db95e87622d984ff35 (image=quay.io/ceph/ceph:v18, name=peaceful_swanson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 18:20:01 compute-0 systemd-rc-local-generator[82626]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:20:01 compute-0 systemd-sysv-generator[82629]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:20:02 compute-0 ceph-mon[74927]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:02 compute-0 ceph-mon[74927]: Deploying daemon crash.compute-0 on compute-0
Nov 24 18:20:02 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3033777510' entity='client.admin' 
Nov 24 18:20:02 compute-0 systemd[1]: Reloading.
Nov 24 18:20:02 compute-0 systemd-rc-local-generator[82668]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:20:02 compute-0 systemd-sysv-generator[82672]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:20:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:20:02 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:02 compute-0 systemd[1]: Starting Ceph crash.compute-0 for e5ee928f-099b-569b-93c9-ecf025cbb50d...
Nov 24 18:20:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Nov 24 18:20:02 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/605511265' entity='client.admin' 
Nov 24 18:20:02 compute-0 systemd[1]: libpod-0b65e2ff6f159bec9c3f0f2ced076d1d8a8a14a8bc7eb4db95e87622d984ff35.scope: Deactivated successfully.
Nov 24 18:20:02 compute-0 podman[82576]: 2025-11-24 18:20:02.524799066 +0000 UTC m=+0.866216079 container died 0b65e2ff6f159bec9c3f0f2ced076d1d8a8a14a8bc7eb4db95e87622d984ff35 (image=quay.io/ceph/ceph:v18, name=peaceful_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 18:20:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d6bdaddf4e7eef32e1d93aea15ffce6bd3df2e20cd2c5f7f545f7ac0fa23a85-merged.mount: Deactivated successfully.
Nov 24 18:20:02 compute-0 podman[82576]: 2025-11-24 18:20:02.584894221 +0000 UTC m=+0.926311154 container remove 0b65e2ff6f159bec9c3f0f2ced076d1d8a8a14a8bc7eb4db95e87622d984ff35 (image=quay.io/ceph/ceph:v18, name=peaceful_swanson, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 18:20:02 compute-0 systemd[1]: libpod-conmon-0b65e2ff6f159bec9c3f0f2ced076d1d8a8a14a8bc7eb4db95e87622d984ff35.scope: Deactivated successfully.
Nov 24 18:20:02 compute-0 sudo[82558]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:02 compute-0 podman[82759]: 2025-11-24 18:20:02.692518908 +0000 UTC m=+0.033434581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:20:02 compute-0 podman[82759]: 2025-11-24 18:20:02.778147239 +0000 UTC m=+0.119062942 container create cd3250af4db771b6a0133939d88755e021a29e7d1ca9d8eb073c2b3ab97e18ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-crash-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 18:20:02 compute-0 sudo[82795]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsmzceioevtksfzppsquwbvulwtewgke ; /usr/bin/python3'
Nov 24 18:20:02 compute-0 sudo[82795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:20:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abf9c670d57f1ac4dcd46a55d2b473e74cc2a2c9703cd8e10016e57fe3663fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abf9c670d57f1ac4dcd46a55d2b473e74cc2a2c9703cd8e10016e57fe3663fe/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abf9c670d57f1ac4dcd46a55d2b473e74cc2a2c9703cd8e10016e57fe3663fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abf9c670d57f1ac4dcd46a55d2b473e74cc2a2c9703cd8e10016e57fe3663fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:02 compute-0 podman[82759]: 2025-11-24 18:20:02.900539156 +0000 UTC m=+0.241454859 container init cd3250af4db771b6a0133939d88755e021a29e7d1ca9d8eb073c2b3ab97e18ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-crash-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 18:20:02 compute-0 podman[82759]: 2025-11-24 18:20:02.910218565 +0000 UTC m=+0.251134268 container start cd3250af4db771b6a0133939d88755e021a29e7d1ca9d8eb073c2b3ab97e18ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-crash-compute-0, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 18:20:02 compute-0 python3[82801]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:20:03 compute-0 bash[82759]: cd3250af4db771b6a0133939d88755e021a29e7d1ca9d8eb073c2b3ab97e18ec
Nov 24 18:20:03 compute-0 systemd[1]: Started Ceph crash.compute-0 for e5ee928f-099b-569b-93c9-ecf025cbb50d.
Nov 24 18:20:03 compute-0 sudo[82420]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:20:03 compute-0 podman[82805]: 2025-11-24 18:20:03.087283777 +0000 UTC m=+0.072581807 container create b88b497727ea2a11ce3db1f98d8d9554680e7ce320c559849ac625105175d7bc (image=quay.io/ceph/ceph:v18, name=sleepy_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:20:03 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:20:03 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 24 18:20:03 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:03 compute-0 ceph-mgr[75218]: [progress INFO root] complete: finished ev fdda5bf1-6e1e-476a-b44b-c7c92d3cdd82 (Updating crash deployment (+1 -> 1))
Nov 24 18:20:03 compute-0 ceph-mgr[75218]: [progress INFO root] Completed event fdda5bf1-6e1e-476a-b44b-c7c92d3cdd82 (Updating crash deployment (+1 -> 1)) in 2 seconds
Nov 24 18:20:03 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-crash-compute-0[82799]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 24 18:20:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 24 18:20:03 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:03 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev ab68b6b2-560b-49f3-9a0c-9ea13e98aaea does not exist
Nov 24 18:20:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 24 18:20:03 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:03 compute-0 ceph-mgr[75218]: [progress INFO root] update: starting ev 31377859-8c04-4f77-a6b4-33708e64b87c (Updating mgr deployment (+1 -> 2))
Nov 24 18:20:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.uspkow", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 24 18:20:03 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.uspkow", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 18:20:03 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.uspkow", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 24 18:20:03 compute-0 systemd[1]: Started libpod-conmon-b88b497727ea2a11ce3db1f98d8d9554680e7ce320c559849ac625105175d7bc.scope.
Nov 24 18:20:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 24 18:20:03 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 18:20:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:20:03 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:20:03 compute-0 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.uspkow on compute-0
Nov 24 18:20:03 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.uspkow on compute-0
Nov 24 18:20:03 compute-0 podman[82805]: 2025-11-24 18:20:03.060345724 +0000 UTC m=+0.045643774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:20:03 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eac676a84535fc3db73ffdbaa0b3d92c89afd660b35f7945b050fb6096720cb0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eac676a84535fc3db73ffdbaa0b3d92c89afd660b35f7945b050fb6096720cb0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eac676a84535fc3db73ffdbaa0b3d92c89afd660b35f7945b050fb6096720cb0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:03 compute-0 podman[82805]: 2025-11-24 18:20:03.196370261 +0000 UTC m=+0.181668331 container init b88b497727ea2a11ce3db1f98d8d9554680e7ce320c559849ac625105175d7bc (image=quay.io/ceph/ceph:v18, name=sleepy_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:20:03 compute-0 podman[82805]: 2025-11-24 18:20:03.215122514 +0000 UTC m=+0.200420584 container start b88b497727ea2a11ce3db1f98d8d9554680e7ce320c559849ac625105175d7bc (image=quay.io/ceph/ceph:v18, name=sleepy_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 18:20:03 compute-0 podman[82805]: 2025-11-24 18:20:03.220210054 +0000 UTC m=+0.205508124 container attach b88b497727ea2a11ce3db1f98d8d9554680e7ce320c559849ac625105175d7bc (image=quay.io/ceph/ceph:v18, name=sleepy_chaum, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 18:20:03 compute-0 sudo[82824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:03 compute-0 sudo[82824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:03 compute-0 sudo[82824]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:03 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-crash-compute-0[82799]: 2025-11-24T18:20:03.306+0000 7fdc9c488640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 24 18:20:03 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-crash-compute-0[82799]: 2025-11-24T18:20:03.306+0000 7fdc9c488640 -1 AuthRegistry(0x7fdc94067440) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 24 18:20:03 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-crash-compute-0[82799]: 2025-11-24T18:20:03.308+0000 7fdc9c488640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 24 18:20:03 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-crash-compute-0[82799]: 2025-11-24T18:20:03.308+0000 7fdc9c488640 -1 AuthRegistry(0x7fdc9c487000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 24 18:20:03 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-crash-compute-0[82799]: 2025-11-24T18:20:03.311+0000 7fdc9a1fd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 24 18:20:03 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-crash-compute-0[82799]: 2025-11-24T18:20:03.311+0000 7fdc9c488640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 24 18:20:03 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-crash-compute-0[82799]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 24 18:20:03 compute-0 sudo[82851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:20:03 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-crash-compute-0[82799]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 24 18:20:03 compute-0 sudo[82851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:03 compute-0 sudo[82851]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:03 compute-0 sudo[82886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:03 compute-0 sudo[82886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:03 compute-0 sudo[82886]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:03 compute-0 sudo[82911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 18:20:03 compute-0 sudo[82911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:03 compute-0 ceph-mon[74927]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:03 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/605511265' entity='client.admin' 
Nov 24 18:20:03 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:03 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:03 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:03 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:03 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:03 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.uspkow", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 18:20:03 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.uspkow", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 24 18:20:03 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 18:20:03 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:20:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Nov 24 18:20:03 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2121276909' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 24 18:20:03 compute-0 podman[82998]: 2025-11-24 18:20:03.927393955 +0000 UTC m=+0.046984119 container create a33bf2823f3ca2f433e325849a8e9ee1cac4b3be8cce5f20e51114f88e7a403f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 18:20:03 compute-0 systemd[1]: Started libpod-conmon-a33bf2823f3ca2f433e325849a8e9ee1cac4b3be8cce5f20e51114f88e7a403f.scope.
Nov 24 18:20:04 compute-0 podman[82998]: 2025-11-24 18:20:03.908783557 +0000 UTC m=+0.028373721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:20:04 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:04 compute-0 podman[82998]: 2025-11-24 18:20:04.025163999 +0000 UTC m=+0.144754183 container init a33bf2823f3ca2f433e325849a8e9ee1cac4b3be8cce5f20e51114f88e7a403f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wescoff, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:20:04 compute-0 podman[82998]: 2025-11-24 18:20:04.030997199 +0000 UTC m=+0.150587363 container start a33bf2823f3ca2f433e325849a8e9ee1cac4b3be8cce5f20e51114f88e7a403f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wescoff, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 18:20:04 compute-0 podman[82998]: 2025-11-24 18:20:04.034101799 +0000 UTC m=+0.153691963 container attach a33bf2823f3ca2f433e325849a8e9ee1cac4b3be8cce5f20e51114f88e7a403f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:20:04 compute-0 pensive_wescoff[83014]: 167 167
Nov 24 18:20:04 compute-0 systemd[1]: libpod-a33bf2823f3ca2f433e325849a8e9ee1cac4b3be8cce5f20e51114f88e7a403f.scope: Deactivated successfully.
Nov 24 18:20:04 compute-0 conmon[83014]: conmon a33bf2823f3ca2f433e3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a33bf2823f3ca2f433e325849a8e9ee1cac4b3be8cce5f20e51114f88e7a403f.scope/container/memory.events
Nov 24 18:20:04 compute-0 podman[82998]: 2025-11-24 18:20:04.036825929 +0000 UTC m=+0.156416093 container died a33bf2823f3ca2f433e325849a8e9ee1cac4b3be8cce5f20e51114f88e7a403f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wescoff, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 18:20:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3b7f460d3cdad070b9f46081e6896fea3e8cb62547658ecc0bd6abce068e226-merged.mount: Deactivated successfully.
Nov 24 18:20:04 compute-0 podman[82998]: 2025-11-24 18:20:04.074939549 +0000 UTC m=+0.194529733 container remove a33bf2823f3ca2f433e325849a8e9ee1cac4b3be8cce5f20e51114f88e7a403f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wescoff, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:20:04 compute-0 systemd[1]: libpod-conmon-a33bf2823f3ca2f433e325849a8e9ee1cac4b3be8cce5f20e51114f88e7a403f.scope: Deactivated successfully.
Nov 24 18:20:04 compute-0 systemd[1]: Reloading.
Nov 24 18:20:04 compute-0 systemd-rc-local-generator[83056]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:20:04 compute-0 systemd-sysv-generator[83062]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:20:04 compute-0 systemd[1]: Reloading.
Nov 24 18:20:04 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:04 compute-0 systemd-rc-local-generator[83100]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:20:04 compute-0 systemd-sysv-generator[83104]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:20:04 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Nov 24 18:20:04 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 18:20:04 compute-0 ceph-mon[74927]: Deploying daemon mgr.compute-0.uspkow on compute-0
Nov 24 18:20:04 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2121276909' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 24 18:20:04 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2121276909' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 24 18:20:04 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Nov 24 18:20:04 compute-0 sleepy_chaum[82822]: set require_min_compat_client to mimic
Nov 24 18:20:04 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Nov 24 18:20:04 compute-0 podman[82805]: 2025-11-24 18:20:04.537837969 +0000 UTC m=+1.523135999 container died b88b497727ea2a11ce3db1f98d8d9554680e7ce320c559849ac625105175d7bc (image=quay.io/ceph/ceph:v18, name=sleepy_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:20:04 compute-0 systemd[1]: libpod-b88b497727ea2a11ce3db1f98d8d9554680e7ce320c559849ac625105175d7bc.scope: Deactivated successfully.
Nov 24 18:20:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-eac676a84535fc3db73ffdbaa0b3d92c89afd660b35f7945b050fb6096720cb0-merged.mount: Deactivated successfully.
Nov 24 18:20:04 compute-0 systemd[1]: Starting Ceph mgr.compute-0.uspkow for e5ee928f-099b-569b-93c9-ecf025cbb50d...
Nov 24 18:20:04 compute-0 podman[82805]: 2025-11-24 18:20:04.664213048 +0000 UTC m=+1.649511078 container remove b88b497727ea2a11ce3db1f98d8d9554680e7ce320c559849ac625105175d7bc (image=quay.io/ceph/ceph:v18, name=sleepy_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 18:20:04 compute-0 ceph-mgr[75218]: [progress INFO root] Writing back 1 completed events
Nov 24 18:20:04 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 24 18:20:04 compute-0 systemd[1]: libpod-conmon-b88b497727ea2a11ce3db1f98d8d9554680e7ce320c559849ac625105175d7bc.scope: Deactivated successfully.
Nov 24 18:20:04 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:20:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:20:04 compute-0 sudo[82795]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:20:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:20:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:20:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:20:04 compute-0 podman[83174]: 2025-11-24 18:20:04.896635643 +0000 UTC m=+0.085751065 container create 10bf68c28a30982a3be559035fe9897b6ce4223fe72231c3d9ff0c7d61c8d80e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-uspkow, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:20:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adffdc61dcd9c365741afc9a86b89a8b8a616a00471ff1fa91fa702567cd6101/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adffdc61dcd9c365741afc9a86b89a8b8a616a00471ff1fa91fa702567cd6101/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adffdc61dcd9c365741afc9a86b89a8b8a616a00471ff1fa91fa702567cd6101/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adffdc61dcd9c365741afc9a86b89a8b8a616a00471ff1fa91fa702567cd6101/merged/var/lib/ceph/mgr/ceph-compute-0.uspkow supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:04 compute-0 podman[83174]: 2025-11-24 18:20:04.949618566 +0000 UTC m=+0.138733958 container init 10bf68c28a30982a3be559035fe9897b6ce4223fe72231c3d9ff0c7d61c8d80e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-uspkow, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:20:04 compute-0 podman[83174]: 2025-11-24 18:20:04.954557483 +0000 UTC m=+0.143672875 container start 10bf68c28a30982a3be559035fe9897b6ce4223fe72231c3d9ff0c7d61c8d80e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-uspkow, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 18:20:04 compute-0 bash[83174]: 10bf68c28a30982a3be559035fe9897b6ce4223fe72231c3d9ff0c7d61c8d80e
Nov 24 18:20:04 compute-0 podman[83174]: 2025-11-24 18:20:04.868308865 +0000 UTC m=+0.057424337 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:20:04 compute-0 systemd[1]: Started Ceph mgr.compute-0.uspkow for e5ee928f-099b-569b-93c9-ecf025cbb50d.
Nov 24 18:20:04 compute-0 sudo[82911]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:04 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:20:05 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:05 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:20:05 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:05 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 24 18:20:05 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:05 compute-0 ceph-mgr[75218]: [progress INFO root] complete: finished ev 31377859-8c04-4f77-a6b4-33708e64b87c (Updating mgr deployment (+1 -> 2))
Nov 24 18:20:05 compute-0 ceph-mgr[75218]: [progress INFO root] Completed event 31377859-8c04-4f77-a6b4-33708e64b87c (Updating mgr deployment (+1 -> 2)) in 2 seconds
Nov 24 18:20:05 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 24 18:20:05 compute-0 ceph-mgr[83194]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 18:20:05 compute-0 ceph-mgr[83194]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 24 18:20:05 compute-0 ceph-mgr[83194]: pidfile_write: ignore empty --pid-file
Nov 24 18:20:05 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:05 compute-0 sudo[83218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:05 compute-0 sudo[83218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:05 compute-0 sudo[83218]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:05 compute-0 sudo[83267]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivtyxglzuhfttqoalaxtlfmvarmngkfl ; /usr/bin/python3'
Nov 24 18:20:05 compute-0 sudo[83267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:20:05 compute-0 ceph-mgr[83194]: mgr[py] Loading python module 'alerts'
Nov 24 18:20:05 compute-0 sudo[83268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:20:05 compute-0 sudo[83268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:05 compute-0 sudo[83268]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:05 compute-0 sudo[83295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:05 compute-0 sudo[83295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:05 compute-0 sudo[83295]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:05 compute-0 python3[83274]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:20:05 compute-0 sudo[83320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:20:05 compute-0 sudo[83320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:05 compute-0 sudo[83320]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:05 compute-0 podman[83341]: 2025-11-24 18:20:05.372390605 +0000 UTC m=+0.063837513 container create 28973ca501c3d852d5344a9ac141045d7bd1c8a2d414b96ed4fc3cab849ac449 (image=quay.io/ceph/ceph:v18, name=stupefied_albattani, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:20:05 compute-0 sudo[83356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:05 compute-0 sudo[83356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:05 compute-0 sudo[83356]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:05 compute-0 systemd[1]: Started libpod-conmon-28973ca501c3d852d5344a9ac141045d7bd1c8a2d414b96ed4fc3cab849ac449.scope.
Nov 24 18:20:05 compute-0 podman[83341]: 2025-11-24 18:20:05.341460379 +0000 UTC m=+0.032907387 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:20:05 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:05 compute-0 sudo[83386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 24 18:20:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5ccee1abdeefd6c39068d5d702f801597f906455e3887ee1a7807a2622e095/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5ccee1abdeefd6c39068d5d702f801597f906455e3887ee1a7807a2622e095/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5ccee1abdeefd6c39068d5d702f801597f906455e3887ee1a7807a2622e095/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:05 compute-0 sudo[83386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:05 compute-0 podman[83341]: 2025-11-24 18:20:05.468862615 +0000 UTC m=+0.160309563 container init 28973ca501c3d852d5344a9ac141045d7bd1c8a2d414b96ed4fc3cab849ac449 (image=quay.io/ceph/ceph:v18, name=stupefied_albattani, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:20:05 compute-0 podman[83341]: 2025-11-24 18:20:05.475331671 +0000 UTC m=+0.166778589 container start 28973ca501c3d852d5344a9ac141045d7bd1c8a2d414b96ed4fc3cab849ac449 (image=quay.io/ceph/ceph:v18, name=stupefied_albattani, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:20:05 compute-0 podman[83341]: 2025-11-24 18:20:05.478872992 +0000 UTC m=+0.170319990 container attach 28973ca501c3d852d5344a9ac141045d7bd1c8a2d414b96ed4fc3cab849ac449 (image=quay.io/ceph/ceph:v18, name=stupefied_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 18:20:05 compute-0 ceph-mgr[83194]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 18:20:05 compute-0 ceph-mgr[83194]: mgr[py] Loading python module 'balancer'
Nov 24 18:20:05 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-uspkow[83190]: 2025-11-24T18:20:05.490+0000 7f0ddaee1140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 18:20:05 compute-0 ceph-mon[74927]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:05 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2121276909' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 24 18:20:05 compute-0 ceph-mon[74927]: osdmap e3: 0 total, 0 up, 0 in
Nov 24 18:20:05 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:05 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:05 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:05 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:05 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:05 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-uspkow[83190]: 2025-11-24T18:20:05.738+0000 7f0ddaee1140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 18:20:05 compute-0 ceph-mgr[83194]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 18:20:05 compute-0 ceph-mgr[83194]: mgr[py] Loading python module 'cephadm'
Nov 24 18:20:05 compute-0 podman[83502]: 2025-11-24 18:20:05.905374416 +0000 UTC m=+0.054721178 container exec 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 18:20:06 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:20:06 compute-0 podman[83502]: 2025-11-24 18:20:06.013180518 +0000 UTC m=+0.162527250 container exec_died 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 18:20:06 compute-0 sudo[83525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:06 compute-0 sudo[83525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:06 compute-0 sudo[83525]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:06 compute-0 sudo[83568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:20:06 compute-0 sudo[83568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:06 compute-0 sudo[83568]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:06 compute-0 sudo[83604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:06 compute-0 sudo[83604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:06 compute-0 sudo[83604]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:06 compute-0 sudo[83645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 24 18:20:06 compute-0 sudo[83645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:06 compute-0 sudo[83386]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:20:06 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:20:06 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:20:06 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:20:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:20:06 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:20:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:20:06 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:06 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 6c3f27d5-fab9-41e9-8980-73f288c1fc58 does not exist
Nov 24 18:20:06 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 3bbdf5a1-cdc7-44a6-b413-bb1a62c7deb5 does not exist
Nov 24 18:20:06 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 70f3684e-1e94-4e4c-ba91-7cca2ed96e07 does not exist
Nov 24 18:20:06 compute-0 sudo[83692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:06 compute-0 sudo[83692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:06 compute-0 sudo[83692]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:06 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:06 compute-0 sudo[83733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:20:06 compute-0 sudo[83645]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:06 compute-0 sudo[83733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 24 18:20:06 compute-0 sudo[83733]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:06 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 24 18:20:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Nov 24 18:20:06 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 24 18:20:06 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Nov 24 18:20:06 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 24 18:20:06 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Nov 24 18:20:06 compute-0 ceph-mon[74927]: from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:20:06 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:06 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:06 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:20:06 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:20:06 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:06 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:06 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:06 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:06 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:06 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:06 compute-0 ceph-mgr[75218]: [cephadm INFO root] Added host compute-0
Nov 24 18:20:06 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 24 18:20:06 compute-0 ceph-mgr[75218]: [cephadm INFO root] Saving service mon spec with placement compute-0
Nov 24 18:20:06 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Nov 24 18:20:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 24 18:20:06 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Nov 24 18:20:06 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:06 compute-0 ceph-mgr[75218]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Nov 24 18:20:06 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Nov 24 18:20:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 24 18:20:06 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:06 compute-0 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Nov 24 18:20:06 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Nov 24 18:20:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 24 18:20:06 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 24 18:20:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 24 18:20:06 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 24 18:20:06 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:20:06 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:20:06 compute-0 ceph-mgr[75218]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Nov 24 18:20:06 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Nov 24 18:20:06 compute-0 ceph-mgr[75218]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Nov 24 18:20:06 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Nov 24 18:20:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Nov 24 18:20:06 compute-0 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 24 18:20:06 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 24 18:20:06 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:06 compute-0 stupefied_albattani[83398]: Added host 'compute-0' with addr '192.168.122.100'
Nov 24 18:20:06 compute-0 stupefied_albattani[83398]: Scheduled mon update...
Nov 24 18:20:06 compute-0 stupefied_albattani[83398]: Scheduled mgr update...
Nov 24 18:20:06 compute-0 stupefied_albattani[83398]: Scheduled osd.default_drive_group update...
Nov 24 18:20:06 compute-0 systemd[1]: libpod-28973ca501c3d852d5344a9ac141045d7bd1c8a2d414b96ed4fc3cab849ac449.scope: Deactivated successfully.
Nov 24 18:20:06 compute-0 podman[83341]: 2025-11-24 18:20:06.59517795 +0000 UTC m=+1.286624888 container died 28973ca501c3d852d5344a9ac141045d7bd1c8a2d414b96ed4fc3cab849ac449 (image=quay.io/ceph/ceph:v18, name=stupefied_albattani, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:20:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec5ccee1abdeefd6c39068d5d702f801597f906455e3887ee1a7807a2622e095-merged.mount: Deactivated successfully.
Nov 24 18:20:06 compute-0 sudo[83761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:06 compute-0 sudo[83761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:06 compute-0 sudo[83761]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:06 compute-0 podman[83341]: 2025-11-24 18:20:06.657128413 +0000 UTC m=+1.348575361 container remove 28973ca501c3d852d5344a9ac141045d7bd1c8a2d414b96ed4fc3cab849ac449 (image=quay.io/ceph/ceph:v18, name=stupefied_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:20:06 compute-0 systemd[1]: libpod-conmon-28973ca501c3d852d5344a9ac141045d7bd1c8a2d414b96ed4fc3cab849ac449.scope: Deactivated successfully.
Nov 24 18:20:06 compute-0 sudo[83267]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:06 compute-0 sudo[83797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:20:06 compute-0 sudo[83797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:06 compute-0 sudo[83797]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:06 compute-0 sudo[83822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:06 compute-0 sudo[83822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:06 compute-0 sudo[83822]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:06 compute-0 sudo[83847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 18:20:06 compute-0 sudo[83847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:06 compute-0 sudo[83907]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slgtlkuyottzbtjgltdsbylqolglhnlo ; /usr/bin/python3'
Nov 24 18:20:06 compute-0 sudo[83907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:20:07 compute-0 podman[83914]: 2025-11-24 18:20:07.107379558 +0000 UTC m=+0.063669477 container create e1b169892b8ee50a9b1680d9b5c95ba61bd3151537b681f901c9d6792b6bd5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 18:20:07 compute-0 systemd[1]: Started libpod-conmon-e1b169892b8ee50a9b1680d9b5c95ba61bd3151537b681f901c9d6792b6bd5e3.scope.
Nov 24 18:20:07 compute-0 python3[83911]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:20:07 compute-0 podman[83914]: 2025-11-24 18:20:07.075800477 +0000 UTC m=+0.032090396 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:20:07 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:07 compute-0 podman[83914]: 2025-11-24 18:20:07.205532642 +0000 UTC m=+0.161822541 container init e1b169892b8ee50a9b1680d9b5c95ba61bd3151537b681f901c9d6792b6bd5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jang, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:20:07 compute-0 podman[83945]: 2025-11-24 18:20:07.209486194 +0000 UTC m=+0.038383748 container create f3f2e6d246ac757bd0ae46b321365769c2729cf17ef9bc3d464a506e54ea20c6 (image=quay.io/ceph/ceph:v18, name=blissful_ardinghelli, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:20:07 compute-0 podman[83914]: 2025-11-24 18:20:07.212639855 +0000 UTC m=+0.168929744 container start e1b169892b8ee50a9b1680d9b5c95ba61bd3151537b681f901c9d6792b6bd5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jang, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:20:07 compute-0 hardcore_jang[83942]: 167 167
Nov 24 18:20:07 compute-0 systemd[1]: libpod-e1b169892b8ee50a9b1680d9b5c95ba61bd3151537b681f901c9d6792b6bd5e3.scope: Deactivated successfully.
Nov 24 18:20:07 compute-0 podman[83914]: 2025-11-24 18:20:07.222119258 +0000 UTC m=+0.178409137 container attach e1b169892b8ee50a9b1680d9b5c95ba61bd3151537b681f901c9d6792b6bd5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:20:07 compute-0 podman[83914]: 2025-11-24 18:20:07.222529389 +0000 UTC m=+0.178819268 container died e1b169892b8ee50a9b1680d9b5c95ba61bd3151537b681f901c9d6792b6bd5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jang, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 18:20:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad0caab53ef7ec768ca0c8aa68d4b9de2b14dce76fb7e6e8e62062a7c3834a58-merged.mount: Deactivated successfully.
Nov 24 18:20:07 compute-0 podman[83914]: 2025-11-24 18:20:07.262751243 +0000 UTC m=+0.219041122 container remove e1b169892b8ee50a9b1680d9b5c95ba61bd3151537b681f901c9d6792b6bd5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jang, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:20:07 compute-0 systemd[1]: Started libpod-conmon-f3f2e6d246ac757bd0ae46b321365769c2729cf17ef9bc3d464a506e54ea20c6.scope.
Nov 24 18:20:07 compute-0 systemd[1]: libpod-conmon-e1b169892b8ee50a9b1680d9b5c95ba61bd3151537b681f901c9d6792b6bd5e3.scope: Deactivated successfully.
Nov 24 18:20:07 compute-0 podman[83945]: 2025-11-24 18:20:07.191315286 +0000 UTC m=+0.020212860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:20:07 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4d0e03d0f696741878194ba23784f8e395d94fb89243ca5250e42918e7a343/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4d0e03d0f696741878194ba23784f8e395d94fb89243ca5250e42918e7a343/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4d0e03d0f696741878194ba23784f8e395d94fb89243ca5250e42918e7a343/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:07 compute-0 sudo[83847]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:07 compute-0 podman[83945]: 2025-11-24 18:20:07.316580767 +0000 UTC m=+0.145478371 container init f3f2e6d246ac757bd0ae46b321365769c2729cf17ef9bc3d464a506e54ea20c6 (image=quay.io/ceph/ceph:v18, name=blissful_ardinghelli, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 18:20:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:20:07 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:20:07 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:07 compute-0 podman[83945]: 2025-11-24 18:20:07.330442753 +0000 UTC m=+0.159340317 container start f3f2e6d246ac757bd0ae46b321365769c2729cf17ef9bc3d464a506e54ea20c6 (image=quay.io/ceph/ceph:v18, name=blissful_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:20:07 compute-0 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.dfqptp (unknown last config time)...
Nov 24 18:20:07 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.dfqptp (unknown last config time)...
Nov 24 18:20:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.dfqptp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 24 18:20:07 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.dfqptp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 18:20:07 compute-0 podman[83945]: 2025-11-24 18:20:07.333317747 +0000 UTC m=+0.162215301 container attach f3f2e6d246ac757bd0ae46b321365769c2729cf17ef9bc3d464a506e54ea20c6 (image=quay.io/ceph/ceph:v18, name=blissful_ardinghelli, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 18:20:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 24 18:20:07 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 18:20:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:20:07 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:20:07 compute-0 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.dfqptp on compute-0
Nov 24 18:20:07 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.dfqptp on compute-0
Nov 24 18:20:07 compute-0 sudo[83982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:07 compute-0 sudo[83982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:07 compute-0 sudo[83982]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:07 compute-0 sudo[84007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:20:07 compute-0 sudo[84007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:07 compute-0 sudo[84007]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:20:07 compute-0 sudo[84032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:07 compute-0 sudo[84032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:07 compute-0 sudo[84032]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:07 compute-0 ceph-mon[74927]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:07 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:07 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:07 compute-0 ceph-mon[74927]: Added host compute-0
Nov 24 18:20:07 compute-0 ceph-mon[74927]: Saving service mon spec with placement compute-0
Nov 24 18:20:07 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:07 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:07 compute-0 ceph-mon[74927]: Saving service mgr spec with placement compute-0
Nov 24 18:20:07 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:07 compute-0 ceph-mon[74927]: Reconfiguring mon.compute-0 (unknown last config time)...
Nov 24 18:20:07 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 24 18:20:07 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 24 18:20:07 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:07 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:20:07 compute-0 ceph-mon[74927]: Marking host: compute-0 for OSDSpec preview refresh.
Nov 24 18:20:07 compute-0 ceph-mon[74927]: Saving service osd.default_drive_group spec with placement compute-0
Nov 24 18:20:07 compute-0 ceph-mon[74927]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 24 18:20:07 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:07 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:07 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:07 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.dfqptp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 18:20:07 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 18:20:07 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:20:07 compute-0 sudo[84057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 18:20:07 compute-0 sudo[84057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:07 compute-0 ceph-mgr[83194]: mgr[py] Loading python module 'crash'
Nov 24 18:20:07 compute-0 podman[84117]: 2025-11-24 18:20:07.790337117 +0000 UTC m=+0.035967656 container create 7959061500b934909e80c0781419222df4b5cf0a1925293d28e82f4bb0924d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:20:07 compute-0 systemd[1]: Started libpod-conmon-7959061500b934909e80c0781419222df4b5cf0a1925293d28e82f4bb0924d41.scope.
Nov 24 18:20:07 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:07 compute-0 podman[84117]: 2025-11-24 18:20:07.85968782 +0000 UTC m=+0.105318359 container init 7959061500b934909e80c0781419222df4b5cf0a1925293d28e82f4bb0924d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_proskuriakova, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:20:07 compute-0 podman[84117]: 2025-11-24 18:20:07.86514509 +0000 UTC m=+0.110775659 container start 7959061500b934909e80c0781419222df4b5cf0a1925293d28e82f4bb0924d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:20:07 compute-0 laughing_proskuriakova[84133]: 167 167
Nov 24 18:20:07 compute-0 systemd[1]: libpod-7959061500b934909e80c0781419222df4b5cf0a1925293d28e82f4bb0924d41.scope: Deactivated successfully.
Nov 24 18:20:07 compute-0 conmon[84133]: conmon 7959061500b934909e80 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7959061500b934909e80c0781419222df4b5cf0a1925293d28e82f4bb0924d41.scope/container/memory.events
Nov 24 18:20:07 compute-0 podman[84117]: 2025-11-24 18:20:07.869156223 +0000 UTC m=+0.114786762 container attach 7959061500b934909e80c0781419222df4b5cf0a1925293d28e82f4bb0924d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_proskuriakova, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:20:07 compute-0 podman[84117]: 2025-11-24 18:20:07.870132398 +0000 UTC m=+0.115762927 container died 7959061500b934909e80c0781419222df4b5cf0a1925293d28e82f4bb0924d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_proskuriakova, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 18:20:07 compute-0 podman[84117]: 2025-11-24 18:20:07.774169101 +0000 UTC m=+0.019799680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:20:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf0745fc83558bf80f00c6ad14a6a5f90760cd19f33e933f4ea8597e0c99878e-merged.mount: Deactivated successfully.
Nov 24 18:20:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 24 18:20:07 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/748237275' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 18:20:07 compute-0 blissful_ardinghelli[83977]: 
Nov 24 18:20:07 compute-0 blissful_ardinghelli[83977]: {"fsid":"e5ee928f-099b-569b-93c9-ecf025cbb50d","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":80,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-24T18:18:44.978620+0000","services":{}},"progress_events":{}}
Nov 24 18:20:07 compute-0 systemd[1]: libpod-f3f2e6d246ac757bd0ae46b321365769c2729cf17ef9bc3d464a506e54ea20c6.scope: Deactivated successfully.
Nov 24 18:20:07 compute-0 ceph-mgr[83194]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 18:20:07 compute-0 ceph-mgr[83194]: mgr[py] Loading python module 'dashboard'
Nov 24 18:20:07 compute-0 conmon[83977]: conmon f3f2e6d246ac757bd0ae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f3f2e6d246ac757bd0ae46b321365769c2729cf17ef9bc3d464a506e54ea20c6.scope/container/memory.events
Nov 24 18:20:07 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-uspkow[83190]: 2025-11-24T18:20:07.940+0000 7f0ddaee1140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 18:20:07 compute-0 podman[84117]: 2025-11-24 18:20:07.951551671 +0000 UTC m=+0.197182220 container remove 7959061500b934909e80c0781419222df4b5cf0a1925293d28e82f4bb0924d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_proskuriakova, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 18:20:07 compute-0 podman[83945]: 2025-11-24 18:20:07.961156088 +0000 UTC m=+0.790053652 container died f3f2e6d246ac757bd0ae46b321365769c2729cf17ef9bc3d464a506e54ea20c6 (image=quay.io/ceph/ceph:v18, name=blissful_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 18:20:07 compute-0 systemd[1]: libpod-conmon-7959061500b934909e80c0781419222df4b5cf0a1925293d28e82f4bb0924d41.scope: Deactivated successfully.
Nov 24 18:20:07 compute-0 sudo[84057]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b4d0e03d0f696741878194ba23784f8e395d94fb89243ca5250e42918e7a343-merged.mount: Deactivated successfully.
Nov 24 18:20:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:20:08 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:20:08 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:08 compute-0 podman[83945]: 2025-11-24 18:20:08.040429036 +0000 UTC m=+0.869326600 container remove f3f2e6d246ac757bd0ae46b321365769c2729cf17ef9bc3d464a506e54ea20c6 (image=quay.io/ceph/ceph:v18, name=blissful_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:20:08 compute-0 systemd[1]: libpod-conmon-f3f2e6d246ac757bd0ae46b321365769c2729cf17ef9bc3d464a506e54ea20c6.scope: Deactivated successfully.
Nov 24 18:20:08 compute-0 sudo[83907]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:08 compute-0 sudo[84164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:08 compute-0 sudo[84164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:08 compute-0 sudo[84164]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:08 compute-0 sudo[84189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:20:08 compute-0 sudo[84189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:08 compute-0 sudo[84189]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:08 compute-0 sudo[84214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:08 compute-0 sudo[84214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:08 compute-0 sudo[84214]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:08 compute-0 sudo[84239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 24 18:20:08 compute-0 sudo[84239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:08 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:08 compute-0 ceph-mon[74927]: Reconfiguring mgr.compute-0.dfqptp (unknown last config time)...
Nov 24 18:20:08 compute-0 ceph-mon[74927]: Reconfiguring daemon mgr.compute-0.dfqptp on compute-0
Nov 24 18:20:08 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/748237275' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 18:20:08 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:08 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:09 compute-0 podman[84337]: 2025-11-24 18:20:09.011783499 +0000 UTC m=+0.087977323 container exec 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:20:09 compute-0 podman[84337]: 2025-11-24 18:20:09.146534213 +0000 UTC m=+0.222727997 container exec_died 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 18:20:09 compute-0 ceph-mgr[83194]: mgr[py] Loading python module 'devicehealth'
Nov 24 18:20:09 compute-0 sudo[84239]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:20:09 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:20:09 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:20:09 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:20:09 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:20:09 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:20:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:20:09 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:20:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:20:09 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:09 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 65dbbbac-f8bf-4631-b7da-eeb20d52b674 does not exist
Nov 24 18:20:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 24 18:20:09 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:09 compute-0 ceph-mgr[75218]: [progress INFO root] update: starting ev 2556767f-a180-4740-8216-e5585c25e697 (Updating mgr deployment (-1 -> 1))
Nov 24 18:20:09 compute-0 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.uspkow from compute-0 -- ports [8765]
Nov 24 18:20:09 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.uspkow from compute-0 -- ports [8765]
Nov 24 18:20:09 compute-0 ceph-mgr[83194]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 18:20:09 compute-0 ceph-mgr[83194]: mgr[py] Loading python module 'diskprediction_local'
Nov 24 18:20:09 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-uspkow[83190]: 2025-11-24T18:20:09.533+0000 7f0ddaee1140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 18:20:09 compute-0 sudo[84423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:09 compute-0 sudo[84423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:09 compute-0 sudo[84423]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:09 compute-0 ceph-mgr[75218]: [progress INFO root] Writing back 2 completed events
Nov 24 18:20:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 24 18:20:09 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:09 compute-0 sudo[84448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:20:09 compute-0 sudo[84448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:09 compute-0 sudo[84448]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:09 compute-0 sudo[84473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:09 compute-0 sudo[84473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:09 compute-0 sudo[84473]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:09 compute-0 sudo[84498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 rm-daemon --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --name mgr.compute-0.uspkow --force --tcp-ports 8765
Nov 24 18:20:09 compute-0 sudo[84498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:10 compute-0 ceph-mon[74927]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:10 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:10 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:10 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:10 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:10 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:20:10 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:20:10 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:10 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:10 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:10 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-uspkow[83190]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 24 18:20:10 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-uspkow[83190]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 24 18:20:10 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-uspkow[83190]:   from numpy import show_config as show_numpy_config
Nov 24 18:20:10 compute-0 ceph-mgr[83194]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 18:20:10 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-uspkow[83190]: 2025-11-24T18:20:10.046+0000 7f0ddaee1140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 18:20:10 compute-0 ceph-mgr[83194]: mgr[py] Loading python module 'influx'
Nov 24 18:20:10 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.uspkow for e5ee928f-099b-569b-93c9-ecf025cbb50d...
Nov 24 18:20:10 compute-0 ceph-mgr[83194]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 18:20:10 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-uspkow[83190]: 2025-11-24T18:20:10.273+0000 7f0ddaee1140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 18:20:10 compute-0 ceph-mgr[83194]: mgr[py] Loading python module 'insights'
Nov 24 18:20:10 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:10 compute-0 podman[84591]: 2025-11-24 18:20:10.462705469 +0000 UTC m=+0.102403063 container died 10bf68c28a30982a3be559035fe9897b6ce4223fe72231c3d9ff0c7d61c8d80e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-uspkow, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:20:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-adffdc61dcd9c365741afc9a86b89a8b8a616a00471ff1fa91fa702567cd6101-merged.mount: Deactivated successfully.
Nov 24 18:20:10 compute-0 podman[84591]: 2025-11-24 18:20:10.550809954 +0000 UTC m=+0.190507518 container remove 10bf68c28a30982a3be559035fe9897b6ce4223fe72231c3d9ff0c7d61c8d80e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-uspkow, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 18:20:10 compute-0 bash[84591]: ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-uspkow
Nov 24 18:20:10 compute-0 systemd[1]: ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d@mgr.compute-0.uspkow.service: Main process exited, code=exited, status=143/n/a
Nov 24 18:20:10 compute-0 systemd[1]: ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d@mgr.compute-0.uspkow.service: Failed with result 'exit-code'.
Nov 24 18:20:10 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.uspkow for e5ee928f-099b-569b-93c9-ecf025cbb50d.
Nov 24 18:20:10 compute-0 systemd[1]: ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d@mgr.compute-0.uspkow.service: Consumed 6.354s CPU time.
Nov 24 18:20:10 compute-0 systemd[1]: Reloading.
Nov 24 18:20:10 compute-0 systemd-rc-local-generator[84681]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:20:10 compute-0 systemd-sysv-generator[84685]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:20:11 compute-0 ceph-mon[74927]: Removing daemon mgr.compute-0.uspkow from compute-0 -- ports [8765]
Nov 24 18:20:11 compute-0 sudo[84498]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:11 compute-0 ceph-mgr[75218]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.uspkow
Nov 24 18:20:11 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.uspkow
Nov 24 18:20:11 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.uspkow"} v 0) v1
Nov 24 18:20:11 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.uspkow"}]: dispatch
Nov 24 18:20:11 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.uspkow"}]': finished
Nov 24 18:20:11 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 24 18:20:11 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:11 compute-0 ceph-mgr[75218]: [progress INFO root] complete: finished ev 2556767f-a180-4740-8216-e5585c25e697 (Updating mgr deployment (-1 -> 1))
Nov 24 18:20:11 compute-0 ceph-mgr[75218]: [progress INFO root] Completed event 2556767f-a180-4740-8216-e5585c25e697 (Updating mgr deployment (-1 -> 1)) in 2 seconds
Nov 24 18:20:11 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 24 18:20:11 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:11 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 94bd629f-1ccd-46a3-a3d1-48a51d23cf59 does not exist
Nov 24 18:20:11 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:20:11 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:20:11 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:20:11 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:20:11 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:20:11 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:20:11 compute-0 sudo[84691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:11 compute-0 sudo[84691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:11 compute-0 sudo[84691]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:11 compute-0 sudo[84716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:20:11 compute-0 sudo[84716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:11 compute-0 sudo[84716]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:11 compute-0 sudo[84741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:11 compute-0 sudo[84741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:11 compute-0 sudo[84741]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:11 compute-0 sudo[84766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:20:11 compute-0 sudo[84766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:11 compute-0 podman[84834]: 2025-11-24 18:20:11.84201669 +0000 UTC m=+0.041357594 container create bcf01150f7222120cf026f892da6eefa8b2594d1ebde5b33945c5b3abc9a32be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cohen, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 18:20:11 compute-0 systemd[1]: Started libpod-conmon-bcf01150f7222120cf026f892da6eefa8b2594d1ebde5b33945c5b3abc9a32be.scope.
Nov 24 18:20:11 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:11 compute-0 podman[84834]: 2025-11-24 18:20:11.825221338 +0000 UTC m=+0.024562272 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:20:11 compute-0 podman[84834]: 2025-11-24 18:20:11.934767795 +0000 UTC m=+0.134108699 container init bcf01150f7222120cf026f892da6eefa8b2594d1ebde5b33945c5b3abc9a32be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:20:11 compute-0 podman[84834]: 2025-11-24 18:20:11.943568001 +0000 UTC m=+0.142908955 container start bcf01150f7222120cf026f892da6eefa8b2594d1ebde5b33945c5b3abc9a32be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:20:11 compute-0 podman[84834]: 2025-11-24 18:20:11.947693577 +0000 UTC m=+0.147034521 container attach bcf01150f7222120cf026f892da6eefa8b2594d1ebde5b33945c5b3abc9a32be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:20:11 compute-0 epic_cohen[84850]: 167 167
Nov 24 18:20:11 compute-0 systemd[1]: libpod-bcf01150f7222120cf026f892da6eefa8b2594d1ebde5b33945c5b3abc9a32be.scope: Deactivated successfully.
Nov 24 18:20:11 compute-0 conmon[84850]: conmon bcf01150f7222120cf02 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bcf01150f7222120cf026f892da6eefa8b2594d1ebde5b33945c5b3abc9a32be.scope/container/memory.events
Nov 24 18:20:11 compute-0 podman[84834]: 2025-11-24 18:20:11.953927367 +0000 UTC m=+0.153268271 container died bcf01150f7222120cf026f892da6eefa8b2594d1ebde5b33945c5b3abc9a32be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cohen, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 18:20:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc8618a5aba2d8a6830d5c9f432eac3ac5cef25a4e7df0e8c7c5ec17f62960f3-merged.mount: Deactivated successfully.
Nov 24 18:20:11 compute-0 podman[84834]: 2025-11-24 18:20:11.994257954 +0000 UTC m=+0.193598868 container remove bcf01150f7222120cf026f892da6eefa8b2594d1ebde5b33945c5b3abc9a32be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cohen, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 18:20:12 compute-0 systemd[1]: libpod-conmon-bcf01150f7222120cf026f892da6eefa8b2594d1ebde5b33945c5b3abc9a32be.scope: Deactivated successfully.
Nov 24 18:20:12 compute-0 ceph-mon[74927]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:12 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.uspkow"}]: dispatch
Nov 24 18:20:12 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.uspkow"}]': finished
Nov 24 18:20:12 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:12 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:12 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:20:12 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:20:12 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:20:12 compute-0 podman[84872]: 2025-11-24 18:20:12.178277825 +0000 UTC m=+0.041881578 container create 88b18c17768c2c5db871dc67d5a25fa2fd9e8709905d1dcbe5207f2cf6826ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:20:12 compute-0 systemd[1]: Started libpod-conmon-88b18c17768c2c5db871dc67d5a25fa2fd9e8709905d1dcbe5207f2cf6826ed3.scope.
Nov 24 18:20:12 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77de3137e9cee6f69567c41c423164b2b9265e090a7ea7ca06f9540e2561a1ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77de3137e9cee6f69567c41c423164b2b9265e090a7ea7ca06f9540e2561a1ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77de3137e9cee6f69567c41c423164b2b9265e090a7ea7ca06f9540e2561a1ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77de3137e9cee6f69567c41c423164b2b9265e090a7ea7ca06f9540e2561a1ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77de3137e9cee6f69567c41c423164b2b9265e090a7ea7ca06f9540e2561a1ec/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:12 compute-0 podman[84872]: 2025-11-24 18:20:12.161415921 +0000 UTC m=+0.025019684 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:20:12 compute-0 podman[84872]: 2025-11-24 18:20:12.256976388 +0000 UTC m=+0.120580151 container init 88b18c17768c2c5db871dc67d5a25fa2fd9e8709905d1dcbe5207f2cf6826ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elion, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 18:20:12 compute-0 podman[84872]: 2025-11-24 18:20:12.264978494 +0000 UTC m=+0.128582267 container start 88b18c17768c2c5db871dc67d5a25fa2fd9e8709905d1dcbe5207f2cf6826ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elion, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:20:12 compute-0 podman[84872]: 2025-11-24 18:20:12.268754661 +0000 UTC m=+0.132358424 container attach 88b18c17768c2c5db871dc67d5a25fa2fd9e8709905d1dcbe5207f2cf6826ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elion, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 18:20:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:20:12 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:13 compute-0 ceph-mon[74927]: Removing key for mgr.compute-0.uspkow
Nov 24 18:20:13 compute-0 competent_elion[84888]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:20:13 compute-0 competent_elion[84888]: --> relative data size: 1.0
Nov 24 18:20:13 compute-0 competent_elion[84888]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 18:20:13 compute-0 competent_elion[84888]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 1f8f8fab-5f72-4f8f-b22f-80baf46bd30b
Nov 24 18:20:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b"} v 0) v1
Nov 24 18:20:13 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/689176563' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b"}]: dispatch
Nov 24 18:20:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Nov 24 18:20:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 18:20:13 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/689176563' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b"}]': finished
Nov 24 18:20:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Nov 24 18:20:13 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Nov 24 18:20:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 18:20:13 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:13 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 18:20:13 compute-0 competent_elion[84888]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 18:20:13 compute-0 lvm[84949]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 18:20:13 compute-0 lvm[84949]: VG ceph_vg0 finished
Nov 24 18:20:13 compute-0 competent_elion[84888]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Nov 24 18:20:13 compute-0 competent_elion[84888]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 24 18:20:13 compute-0 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 24 18:20:13 compute-0 competent_elion[84888]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 24 18:20:13 compute-0 competent_elion[84888]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Nov 24 18:20:14 compute-0 ceph-mon[74927]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:14 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/689176563' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b"}]: dispatch
Nov 24 18:20:14 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/689176563' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b"}]': finished
Nov 24 18:20:14 compute-0 ceph-mon[74927]: osdmap e4: 1 total, 0 up, 1 in
Nov 24 18:20:14 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:14 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 24 18:20:14 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2593176764' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 24 18:20:14 compute-0 competent_elion[84888]:  stderr: got monmap epoch 1
Nov 24 18:20:14 compute-0 competent_elion[84888]: --> Creating keyring file for osd.0
Nov 24 18:20:14 compute-0 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Nov 24 18:20:14 compute-0 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Nov 24 18:20:14 compute-0 competent_elion[84888]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 1f8f8fab-5f72-4f8f-b22f-80baf46bd30b --setuser ceph --setgroup ceph
Nov 24 18:20:14 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:14 compute-0 ceph-mgr[75218]: [progress INFO root] Writing back 3 completed events
Nov 24 18:20:14 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 24 18:20:14 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:15 compute-0 ceph-mon[74927]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 24 18:20:15 compute-0 ceph-mon[74927]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 24 18:20:15 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2593176764' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 24 18:20:15 compute-0 ceph-mon[74927]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:15 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:16 compute-0 ceph-mon[74927]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 24 18:20:16 compute-0 ceph-mon[74927]: Cluster is now healthy
Nov 24 18:20:16 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:17 compute-0 ceph-mon[74927]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:20:17 compute-0 competent_elion[84888]:  stderr: 2025-11-24T18:20:14.434+0000 7f4e60011740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 24 18:20:17 compute-0 competent_elion[84888]:  stderr: 2025-11-24T18:20:14.434+0000 7f4e60011740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 24 18:20:17 compute-0 competent_elion[84888]:  stderr: 2025-11-24T18:20:14.434+0000 7f4e60011740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 24 18:20:17 compute-0 competent_elion[84888]:  stderr: 2025-11-24T18:20:14.434+0000 7f4e60011740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Nov 24 18:20:17 compute-0 competent_elion[84888]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 24 18:20:17 compute-0 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 24 18:20:17 compute-0 competent_elion[84888]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 24 18:20:17 compute-0 competent_elion[84888]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 24 18:20:17 compute-0 competent_elion[84888]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 24 18:20:17 compute-0 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 24 18:20:17 compute-0 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 24 18:20:17 compute-0 competent_elion[84888]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 24 18:20:17 compute-0 competent_elion[84888]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 24 18:20:17 compute-0 competent_elion[84888]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 18:20:17 compute-0 competent_elion[84888]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 79b9678c-793a-417c-9179-1829e79d1a19
Nov 24 18:20:18 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "79b9678c-793a-417c-9179-1829e79d1a19"} v 0) v1
Nov 24 18:20:18 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2438235050' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "79b9678c-793a-417c-9179-1829e79d1a19"}]: dispatch
Nov 24 18:20:18 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Nov 24 18:20:18 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 18:20:18 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2438235050' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "79b9678c-793a-417c-9179-1829e79d1a19"}]': finished
Nov 24 18:20:18 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Nov 24 18:20:18 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Nov 24 18:20:18 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 18:20:18 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:18 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 18:20:18 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:18 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 18:20:18 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 18:20:18 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2438235050' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "79b9678c-793a-417c-9179-1829e79d1a19"}]: dispatch
Nov 24 18:20:18 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2438235050' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "79b9678c-793a-417c-9179-1829e79d1a19"}]': finished
Nov 24 18:20:18 compute-0 ceph-mon[74927]: osdmap e5: 2 total, 0 up, 2 in
Nov 24 18:20:18 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:18 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:18 compute-0 lvm[85883]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 24 18:20:18 compute-0 lvm[85883]: VG ceph_vg1 finished
Nov 24 18:20:18 compute-0 competent_elion[84888]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 18:20:18 compute-0 competent_elion[84888]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Nov 24 18:20:18 compute-0 competent_elion[84888]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Nov 24 18:20:18 compute-0 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 24 18:20:18 compute-0 competent_elion[84888]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 24 18:20:18 compute-0 competent_elion[84888]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Nov 24 18:20:18 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:18 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 24 18:20:18 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/460886543' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 24 18:20:18 compute-0 competent_elion[84888]:  stderr: got monmap epoch 1
Nov 24 18:20:18 compute-0 competent_elion[84888]: --> Creating keyring file for osd.1
Nov 24 18:20:18 compute-0 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Nov 24 18:20:18 compute-0 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Nov 24 18:20:18 compute-0 competent_elion[84888]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 79b9678c-793a-417c-9179-1829e79d1a19 --setuser ceph --setgroup ceph
Nov 24 18:20:19 compute-0 ceph-mon[74927]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/460886543' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 24 18:20:20 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:21 compute-0 competent_elion[84888]:  stderr: 2025-11-24T18:20:18.842+0000 7fac60dc6740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 24 18:20:21 compute-0 competent_elion[84888]:  stderr: 2025-11-24T18:20:18.842+0000 7fac60dc6740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 24 18:20:21 compute-0 competent_elion[84888]:  stderr: 2025-11-24T18:20:18.842+0000 7fac60dc6740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 24 18:20:21 compute-0 competent_elion[84888]:  stderr: 2025-11-24T18:20:18.842+0000 7fac60dc6740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Nov 24 18:20:21 compute-0 competent_elion[84888]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Nov 24 18:20:21 compute-0 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 24 18:20:21 compute-0 competent_elion[84888]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 24 18:20:21 compute-0 competent_elion[84888]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 24 18:20:21 compute-0 competent_elion[84888]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 24 18:20:21 compute-0 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 24 18:20:21 compute-0 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 24 18:20:21 compute-0 competent_elion[84888]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 24 18:20:21 compute-0 competent_elion[84888]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Nov 24 18:20:21 compute-0 competent_elion[84888]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 18:20:21 compute-0 ceph-mon[74927]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:21 compute-0 competent_elion[84888]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new d6904eab-3369-4532-8b99-18f2965a8556
Nov 24 18:20:21 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "d6904eab-3369-4532-8b99-18f2965a8556"} v 0) v1
Nov 24 18:20:21 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3989597058' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d6904eab-3369-4532-8b99-18f2965a8556"}]: dispatch
Nov 24 18:20:21 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Nov 24 18:20:21 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 18:20:21 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3989597058' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d6904eab-3369-4532-8b99-18f2965a8556"}]': finished
Nov 24 18:20:21 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Nov 24 18:20:21 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Nov 24 18:20:21 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 18:20:21 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:21 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 18:20:21 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:21 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 18:20:21 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:21 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 18:20:21 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 18:20:21 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 18:20:22 compute-0 lvm[86821]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 24 18:20:22 compute-0 lvm[86821]: VG ceph_vg2 finished
Nov 24 18:20:22 compute-0 competent_elion[84888]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 18:20:22 compute-0 competent_elion[84888]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Nov 24 18:20:22 compute-0 competent_elion[84888]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Nov 24 18:20:22 compute-0 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 24 18:20:22 compute-0 competent_elion[84888]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 24 18:20:22 compute-0 competent_elion[84888]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Nov 24 18:20:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:20:22 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:22 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3989597058' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d6904eab-3369-4532-8b99-18f2965a8556"}]: dispatch
Nov 24 18:20:22 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3989597058' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d6904eab-3369-4532-8b99-18f2965a8556"}]': finished
Nov 24 18:20:22 compute-0 ceph-mon[74927]: osdmap e6: 3 total, 0 up, 3 in
Nov 24 18:20:22 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:22 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:22 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 24 18:20:22 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3389139640' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 24 18:20:22 compute-0 competent_elion[84888]:  stderr: got monmap epoch 1
Nov 24 18:20:22 compute-0 competent_elion[84888]: --> Creating keyring file for osd.2
Nov 24 18:20:22 compute-0 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Nov 24 18:20:22 compute-0 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Nov 24 18:20:22 compute-0 competent_elion[84888]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid d6904eab-3369-4532-8b99-18f2965a8556 --setuser ceph --setgroup ceph
Nov 24 18:20:23 compute-0 ceph-mon[74927]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:23 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3389139640' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 24 18:20:24 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:25 compute-0 competent_elion[84888]:  stderr: 2025-11-24T18:20:22.666+0000 7fa747aff740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 24 18:20:25 compute-0 competent_elion[84888]:  stderr: 2025-11-24T18:20:22.666+0000 7fa747aff740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 24 18:20:25 compute-0 competent_elion[84888]:  stderr: 2025-11-24T18:20:22.666+0000 7fa747aff740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 24 18:20:25 compute-0 competent_elion[84888]:  stderr: 2025-11-24T18:20:22.666+0000 7fa747aff740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Nov 24 18:20:25 compute-0 competent_elion[84888]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Nov 24 18:20:25 compute-0 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 24 18:20:25 compute-0 competent_elion[84888]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Nov 24 18:20:25 compute-0 competent_elion[84888]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 24 18:20:25 compute-0 competent_elion[84888]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Nov 24 18:20:25 compute-0 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 24 18:20:25 compute-0 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 24 18:20:25 compute-0 competent_elion[84888]: --> ceph-volume lvm activate successful for osd ID: 2
Nov 24 18:20:25 compute-0 competent_elion[84888]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Nov 24 18:20:25 compute-0 systemd[1]: libpod-88b18c17768c2c5db871dc67d5a25fa2fd9e8709905d1dcbe5207f2cf6826ed3.scope: Deactivated successfully.
Nov 24 18:20:25 compute-0 systemd[1]: libpod-88b18c17768c2c5db871dc67d5a25fa2fd9e8709905d1dcbe5207f2cf6826ed3.scope: Consumed 5.914s CPU time.
Nov 24 18:20:25 compute-0 podman[84872]: 2025-11-24 18:20:25.251107959 +0000 UTC m=+13.114711712 container died 88b18c17768c2c5db871dc67d5a25fa2fd9e8709905d1dcbe5207f2cf6826ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elion, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Nov 24 18:20:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-77de3137e9cee6f69567c41c423164b2b9265e090a7ea7ca06f9540e2561a1ec-merged.mount: Deactivated successfully.
Nov 24 18:20:25 compute-0 podman[84872]: 2025-11-24 18:20:25.326817445 +0000 UTC m=+13.190421188 container remove 88b18c17768c2c5db871dc67d5a25fa2fd9e8709905d1dcbe5207f2cf6826ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 18:20:25 compute-0 systemd[1]: libpod-conmon-88b18c17768c2c5db871dc67d5a25fa2fd9e8709905d1dcbe5207f2cf6826ed3.scope: Deactivated successfully.
Nov 24 18:20:25 compute-0 sudo[84766]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:25 compute-0 sudo[87738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:25 compute-0 sudo[87738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:25 compute-0 sudo[87738]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:25 compute-0 sudo[87763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:20:25 compute-0 sudo[87763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:25 compute-0 sudo[87763]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:25 compute-0 ceph-mon[74927]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:25 compute-0 sudo[87788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:25 compute-0 sudo[87788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:25 compute-0 sudo[87788]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:25 compute-0 sudo[87813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:20:25 compute-0 sudo[87813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:26 compute-0 podman[87878]: 2025-11-24 18:20:26.023366793 +0000 UTC m=+0.056594626 container create a956892acbe362964e894a1e3c26cc0968d2e355bf140a55d95902cbaee52dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:20:26 compute-0 systemd[1]: Started libpod-conmon-a956892acbe362964e894a1e3c26cc0968d2e355bf140a55d95902cbaee52dd0.scope.
Nov 24 18:20:26 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:26 compute-0 podman[87878]: 2025-11-24 18:20:26.001286786 +0000 UTC m=+0.034514639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:20:26 compute-0 podman[87878]: 2025-11-24 18:20:26.110434912 +0000 UTC m=+0.143662785 container init a956892acbe362964e894a1e3c26cc0968d2e355bf140a55d95902cbaee52dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_raman, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 18:20:26 compute-0 podman[87878]: 2025-11-24 18:20:26.117604126 +0000 UTC m=+0.150831979 container start a956892acbe362964e894a1e3c26cc0968d2e355bf140a55d95902cbaee52dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_raman, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 18:20:26 compute-0 podman[87878]: 2025-11-24 18:20:26.121571498 +0000 UTC m=+0.154799401 container attach a956892acbe362964e894a1e3c26cc0968d2e355bf140a55d95902cbaee52dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_raman, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 18:20:26 compute-0 suspicious_raman[87894]: 167 167
Nov 24 18:20:26 compute-0 systemd[1]: libpod-a956892acbe362964e894a1e3c26cc0968d2e355bf140a55d95902cbaee52dd0.scope: Deactivated successfully.
Nov 24 18:20:26 compute-0 podman[87878]: 2025-11-24 18:20:26.124118553 +0000 UTC m=+0.157346386 container died a956892acbe362964e894a1e3c26cc0968d2e355bf140a55d95902cbaee52dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_raman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:20:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a02e8cd951dd24155a4356d0f2b590c2bdfaa56637503df96f156b65089185a-merged.mount: Deactivated successfully.
Nov 24 18:20:26 compute-0 podman[87878]: 2025-11-24 18:20:26.160352665 +0000 UTC m=+0.193580488 container remove a956892acbe362964e894a1e3c26cc0968d2e355bf140a55d95902cbaee52dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_raman, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 18:20:26 compute-0 systemd[1]: libpod-conmon-a956892acbe362964e894a1e3c26cc0968d2e355bf140a55d95902cbaee52dd0.scope: Deactivated successfully.
Nov 24 18:20:26 compute-0 podman[87918]: 2025-11-24 18:20:26.407980611 +0000 UTC m=+0.069424976 container create 104d1b75c0253336176a85ffa5e47ded951ecb11845fd2951a1c7cbc9b6d98ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:20:26 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:26 compute-0 systemd[1]: Started libpod-conmon-104d1b75c0253336176a85ffa5e47ded951ecb11845fd2951a1c7cbc9b6d98ad.scope.
Nov 24 18:20:26 compute-0 podman[87918]: 2025-11-24 18:20:26.3822565 +0000 UTC m=+0.043700865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:20:26 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a72f9a5707404ed9ccb1440589838a7119828b9754f7e9427d6fd2ebd3ebd6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a72f9a5707404ed9ccb1440589838a7119828b9754f7e9427d6fd2ebd3ebd6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a72f9a5707404ed9ccb1440589838a7119828b9754f7e9427d6fd2ebd3ebd6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a72f9a5707404ed9ccb1440589838a7119828b9754f7e9427d6fd2ebd3ebd6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:26 compute-0 podman[87918]: 2025-11-24 18:20:26.527623137 +0000 UTC m=+0.189067492 container init 104d1b75c0253336176a85ffa5e47ded951ecb11845fd2951a1c7cbc9b6d98ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 24 18:20:26 compute-0 podman[87918]: 2025-11-24 18:20:26.537528562 +0000 UTC m=+0.198972897 container start 104d1b75c0253336176a85ffa5e47ded951ecb11845fd2951a1c7cbc9b6d98ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jepsen, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:20:26 compute-0 podman[87918]: 2025-11-24 18:20:26.540735824 +0000 UTC m=+0.202180179 container attach 104d1b75c0253336176a85ffa5e47ded951ecb11845fd2951a1c7cbc9b6d98ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jepsen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:20:27 compute-0 kind_jepsen[87935]: {
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:     "0": [
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:         {
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "devices": [
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "/dev/loop3"
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             ],
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "lv_name": "ceph_lv0",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "lv_size": "21470642176",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "name": "ceph_lv0",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "tags": {
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.cluster_name": "ceph",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.crush_device_class": "",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.encrypted": "0",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.osd_id": "0",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.type": "block",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.vdo": "0"
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             },
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "type": "block",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "vg_name": "ceph_vg0"
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:         }
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:     ],
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:     "1": [
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:         {
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "devices": [
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "/dev/loop4"
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             ],
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "lv_name": "ceph_lv1",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "lv_size": "21470642176",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "name": "ceph_lv1",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "tags": {
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.cluster_name": "ceph",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.crush_device_class": "",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.encrypted": "0",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.osd_id": "1",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.type": "block",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.vdo": "0"
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             },
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "type": "block",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "vg_name": "ceph_vg1"
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:         }
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:     ],
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:     "2": [
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:         {
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "devices": [
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "/dev/loop5"
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             ],
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "lv_name": "ceph_lv2",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "lv_size": "21470642176",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "name": "ceph_lv2",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "tags": {
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.cluster_name": "ceph",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.crush_device_class": "",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.encrypted": "0",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.osd_id": "2",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.type": "block",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:                 "ceph.vdo": "0"
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             },
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "type": "block",
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:             "vg_name": "ceph_vg2"
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:         }
Nov 24 18:20:27 compute-0 kind_jepsen[87935]:     ]
Nov 24 18:20:27 compute-0 kind_jepsen[87935]: }
Nov 24 18:20:27 compute-0 systemd[1]: libpod-104d1b75c0253336176a85ffa5e47ded951ecb11845fd2951a1c7cbc9b6d98ad.scope: Deactivated successfully.
Nov 24 18:20:27 compute-0 podman[87918]: 2025-11-24 18:20:27.325145531 +0000 UTC m=+0.986589956 container died 104d1b75c0253336176a85ffa5e47ded951ecb11845fd2951a1c7cbc9b6d98ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:20:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a72f9a5707404ed9ccb1440589838a7119828b9754f7e9427d6fd2ebd3ebd6d-merged.mount: Deactivated successfully.
Nov 24 18:20:27 compute-0 podman[87918]: 2025-11-24 18:20:27.382682234 +0000 UTC m=+1.044126569 container remove 104d1b75c0253336176a85ffa5e47ded951ecb11845fd2951a1c7cbc9b6d98ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:20:27 compute-0 systemd[1]: libpod-conmon-104d1b75c0253336176a85ffa5e47ded951ecb11845fd2951a1c7cbc9b6d98ad.scope: Deactivated successfully.
Nov 24 18:20:27 compute-0 sudo[87813]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Nov 24 18:20:27 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 24 18:20:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:20:27 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:20:27 compute-0 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Nov 24 18:20:27 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Nov 24 18:20:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:20:27 compute-0 sudo[87955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:27 compute-0 sudo[87955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:27 compute-0 sudo[87955]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:27 compute-0 ceph-mon[74927]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:27 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 24 18:20:27 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:20:27 compute-0 sudo[87980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:20:27 compute-0 sudo[87980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:27 compute-0 sudo[87980]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:27 compute-0 sudo[88005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:27 compute-0 sudo[88005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:27 compute-0 sudo[88005]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:27 compute-0 sudo[88030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 18:20:27 compute-0 sudo[88030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:28 compute-0 podman[88096]: 2025-11-24 18:20:28.100747719 +0000 UTC m=+0.037830326 container create e39d70a8396212e99d9c64d0ed36f11c4102f44c946bd88a8a56a0ee50b9d53a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:20:28 compute-0 systemd[1]: Started libpod-conmon-e39d70a8396212e99d9c64d0ed36f11c4102f44c946bd88a8a56a0ee50b9d53a.scope.
Nov 24 18:20:28 compute-0 podman[88096]: 2025-11-24 18:20:28.084651143 +0000 UTC m=+0.021733760 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:20:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:28 compute-0 podman[88096]: 2025-11-24 18:20:28.201150085 +0000 UTC m=+0.138232722 container init e39d70a8396212e99d9c64d0ed36f11c4102f44c946bd88a8a56a0ee50b9d53a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_goldstine, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:20:28 compute-0 podman[88096]: 2025-11-24 18:20:28.21482608 +0000 UTC m=+0.151908677 container start e39d70a8396212e99d9c64d0ed36f11c4102f44c946bd88a8a56a0ee50b9d53a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:20:28 compute-0 podman[88096]: 2025-11-24 18:20:28.219342584 +0000 UTC m=+0.156425231 container attach e39d70a8396212e99d9c64d0ed36f11c4102f44c946bd88a8a56a0ee50b9d53a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:20:28 compute-0 hopeful_goldstine[88112]: 167 167
Nov 24 18:20:28 compute-0 systemd[1]: libpod-e39d70a8396212e99d9c64d0ed36f11c4102f44c946bd88a8a56a0ee50b9d53a.scope: Deactivated successfully.
Nov 24 18:20:28 compute-0 podman[88096]: 2025-11-24 18:20:28.221864508 +0000 UTC m=+0.158947135 container died e39d70a8396212e99d9c64d0ed36f11c4102f44c946bd88a8a56a0ee50b9d53a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_goldstine, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 18:20:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6953e5998dcb0509b60e291de41e43eab87ebdec19a79900b876136ecf81b44-merged.mount: Deactivated successfully.
Nov 24 18:20:28 compute-0 podman[88096]: 2025-11-24 18:20:28.264562436 +0000 UTC m=+0.201645073 container remove e39d70a8396212e99d9c64d0ed36f11c4102f44c946bd88a8a56a0ee50b9d53a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:20:28 compute-0 systemd[1]: libpod-conmon-e39d70a8396212e99d9c64d0ed36f11c4102f44c946bd88a8a56a0ee50b9d53a.scope: Deactivated successfully.
Nov 24 18:20:28 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:28 compute-0 ceph-mon[74927]: Deploying daemon osd.0 on compute-0
Nov 24 18:20:28 compute-0 podman[88143]: 2025-11-24 18:20:28.606447531 +0000 UTC m=+0.062961531 container create 8325d1650e98539ffb6fc427e6b24cbff638ef4620338ca5aab75d67ba07be9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate-test, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:20:28 compute-0 systemd[1]: Started libpod-conmon-8325d1650e98539ffb6fc427e6b24cbff638ef4620338ca5aab75d67ba07be9c.scope.
Nov 24 18:20:28 compute-0 podman[88143]: 2025-11-24 18:20:28.586470656 +0000 UTC m=+0.042984706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:20:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0e73d32a09b0574fe80a51fa7600e29037095c8650082785ef4e8d311c1abf2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0e73d32a09b0574fe80a51fa7600e29037095c8650082785ef4e8d311c1abf2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0e73d32a09b0574fe80a51fa7600e29037095c8650082785ef4e8d311c1abf2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0e73d32a09b0574fe80a51fa7600e29037095c8650082785ef4e8d311c1abf2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0e73d32a09b0574fe80a51fa7600e29037095c8650082785ef4e8d311c1abf2/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:28 compute-0 podman[88143]: 2025-11-24 18:20:28.720502522 +0000 UTC m=+0.177016562 container init 8325d1650e98539ffb6fc427e6b24cbff638ef4620338ca5aab75d67ba07be9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:20:28 compute-0 podman[88143]: 2025-11-24 18:20:28.737334447 +0000 UTC m=+0.193848477 container start 8325d1650e98539ffb6fc427e6b24cbff638ef4620338ca5aab75d67ba07be9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate-test, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:20:28 compute-0 podman[88143]: 2025-11-24 18:20:28.742238741 +0000 UTC m=+0.198752771 container attach 8325d1650e98539ffb6fc427e6b24cbff638ef4620338ca5aab75d67ba07be9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate-test, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:20:29 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate-test[88160]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 24 18:20:29 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate-test[88160]:                             [--no-systemd] [--no-tmpfs]
Nov 24 18:20:29 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate-test[88160]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 24 18:20:29 compute-0 systemd[1]: libpod-8325d1650e98539ffb6fc427e6b24cbff638ef4620338ca5aab75d67ba07be9c.scope: Deactivated successfully.
Nov 24 18:20:29 compute-0 podman[88143]: 2025-11-24 18:20:29.419643879 +0000 UTC m=+0.876157889 container died 8325d1650e98539ffb6fc427e6b24cbff638ef4620338ca5aab75d67ba07be9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate-test, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:20:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0e73d32a09b0574fe80a51fa7600e29037095c8650082785ef4e8d311c1abf2-merged.mount: Deactivated successfully.
Nov 24 18:20:29 compute-0 podman[88143]: 2025-11-24 18:20:29.482345493 +0000 UTC m=+0.938859503 container remove 8325d1650e98539ffb6fc427e6b24cbff638ef4620338ca5aab75d67ba07be9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate-test, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 18:20:29 compute-0 systemd[1]: libpod-conmon-8325d1650e98539ffb6fc427e6b24cbff638ef4620338ca5aab75d67ba07be9c.scope: Deactivated successfully.
Nov 24 18:20:29 compute-0 ceph-mon[74927]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:29 compute-0 systemd[1]: Reloading.
Nov 24 18:20:29 compute-0 systemd-rc-local-generator[88222]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:20:29 compute-0 systemd-sysv-generator[88228]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:20:30 compute-0 systemd[1]: Reloading.
Nov 24 18:20:30 compute-0 systemd-rc-local-generator[88262]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:20:30 compute-0 systemd-sysv-generator[88267]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:20:30 compute-0 systemd[1]: Starting Ceph osd.0 for e5ee928f-099b-569b-93c9-ecf025cbb50d...
Nov 24 18:20:30 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:30 compute-0 podman[88321]: 2025-11-24 18:20:30.691170403 +0000 UTC m=+0.046542417 container create 92bc3a0d28fe5bcf7a8430e13c62f9639815502c401f2281f94d1cb029367eef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:20:30 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/642032cdb9a0d7e7b49e226d85f746a3a6b3fe6c3ba1886f636a251a2d2daab5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/642032cdb9a0d7e7b49e226d85f746a3a6b3fe6c3ba1886f636a251a2d2daab5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/642032cdb9a0d7e7b49e226d85f746a3a6b3fe6c3ba1886f636a251a2d2daab5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/642032cdb9a0d7e7b49e226d85f746a3a6b3fe6c3ba1886f636a251a2d2daab5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/642032cdb9a0d7e7b49e226d85f746a3a6b3fe6c3ba1886f636a251a2d2daab5/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:30 compute-0 podman[88321]: 2025-11-24 18:20:30.66968172 +0000 UTC m=+0.025053694 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:20:30 compute-0 podman[88321]: 2025-11-24 18:20:30.778427526 +0000 UTC m=+0.133799520 container init 92bc3a0d28fe5bcf7a8430e13c62f9639815502c401f2281f94d1cb029367eef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:20:30 compute-0 podman[88321]: 2025-11-24 18:20:30.789889616 +0000 UTC m=+0.145261630 container start 92bc3a0d28fe5bcf7a8430e13c62f9639815502c401f2281f94d1cb029367eef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 18:20:30 compute-0 podman[88321]: 2025-11-24 18:20:30.79440412 +0000 UTC m=+0.149776094 container attach 92bc3a0d28fe5bcf7a8430e13c62f9639815502c401f2281f94d1cb029367eef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:20:31 compute-0 ceph-mon[74927]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:31 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate[88337]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 24 18:20:31 compute-0 bash[88321]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 24 18:20:31 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate[88337]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 24 18:20:31 compute-0 bash[88321]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 24 18:20:31 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate[88337]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 24 18:20:31 compute-0 bash[88321]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 24 18:20:31 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate[88337]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 24 18:20:31 compute-0 bash[88321]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 24 18:20:31 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate[88337]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 24 18:20:31 compute-0 bash[88321]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 24 18:20:31 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate[88337]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 24 18:20:31 compute-0 bash[88321]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 24 18:20:31 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate[88337]: --> ceph-volume raw activate successful for osd ID: 0
Nov 24 18:20:31 compute-0 bash[88321]: --> ceph-volume raw activate successful for osd ID: 0
Nov 24 18:20:31 compute-0 systemd[1]: libpod-92bc3a0d28fe5bcf7a8430e13c62f9639815502c401f2281f94d1cb029367eef.scope: Deactivated successfully.
Nov 24 18:20:31 compute-0 podman[88321]: 2025-11-24 18:20:31.928625705 +0000 UTC m=+1.283997679 container died 92bc3a0d28fe5bcf7a8430e13c62f9639815502c401f2281f94d1cb029367eef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:20:31 compute-0 systemd[1]: libpod-92bc3a0d28fe5bcf7a8430e13c62f9639815502c401f2281f94d1cb029367eef.scope: Consumed 1.154s CPU time.
Nov 24 18:20:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-642032cdb9a0d7e7b49e226d85f746a3a6b3fe6c3ba1886f636a251a2d2daab5-merged.mount: Deactivated successfully.
Nov 24 18:20:31 compute-0 podman[88321]: 2025-11-24 18:20:31.995178355 +0000 UTC m=+1.350550329 container remove 92bc3a0d28fe5bcf7a8430e13c62f9639815502c401f2281f94d1cb029367eef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 18:20:32 compute-0 podman[88525]: 2025-11-24 18:20:32.258369083 +0000 UTC m=+0.051245686 container create 9c8b4f7ebd6278ab85f8ff0f61c024387fd070be0b3dda8e6c486672d394dda2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 18:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56fd52e73cca515bfe6cf463897559755b409a739bc4e2953c8c1596cecbc21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56fd52e73cca515bfe6cf463897559755b409a739bc4e2953c8c1596cecbc21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56fd52e73cca515bfe6cf463897559755b409a739bc4e2953c8c1596cecbc21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56fd52e73cca515bfe6cf463897559755b409a739bc4e2953c8c1596cecbc21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56fd52e73cca515bfe6cf463897559755b409a739bc4e2953c8c1596cecbc21/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:32 compute-0 podman[88525]: 2025-11-24 18:20:32.239766143 +0000 UTC m=+0.032642756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:20:32 compute-0 podman[88525]: 2025-11-24 18:20:32.34022396 +0000 UTC m=+0.133100573 container init 9c8b4f7ebd6278ab85f8ff0f61c024387fd070be0b3dda8e6c486672d394dda2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 24 18:20:32 compute-0 podman[88525]: 2025-11-24 18:20:32.350887679 +0000 UTC m=+0.143764312 container start 9c8b4f7ebd6278ab85f8ff0f61c024387fd070be0b3dda8e6c486672d394dda2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 18:20:32 compute-0 bash[88525]: 9c8b4f7ebd6278ab85f8ff0f61c024387fd070be0b3dda8e6c486672d394dda2
Nov 24 18:20:32 compute-0 systemd[1]: Started Ceph osd.0 for e5ee928f-099b-569b-93c9-ecf025cbb50d.
Nov 24 18:20:32 compute-0 sudo[88030]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:20:32 compute-0 ceph-osd[88544]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 18:20:32 compute-0 ceph-osd[88544]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 24 18:20:32 compute-0 ceph-osd[88544]: pidfile_write: ignore empty --pid-file
Nov 24 18:20:32 compute-0 ceph-osd[88544]: bdev(0x55ab2518b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 18:20:32 compute-0 ceph-osd[88544]: bdev(0x55ab2518b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 18:20:32 compute-0 ceph-osd[88544]: bdev(0x55ab2518b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 18:20:32 compute-0 ceph-osd[88544]: bdev(0x55ab2518b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 18:20:32 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 18:20:32 compute-0 ceph-osd[88544]: bdev(0x55ab25fc3800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 18:20:32 compute-0 ceph-osd[88544]: bdev(0x55ab25fc3800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 18:20:32 compute-0 ceph-osd[88544]: bdev(0x55ab25fc3800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 18:20:32 compute-0 ceph-osd[88544]: bdev(0x55ab25fc3800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 18:20:32 compute-0 ceph-osd[88544]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 24 18:20:32 compute-0 ceph-osd[88544]: bdev(0x55ab25fc3800 /var/lib/ceph/osd/ceph-0/block) close
Nov 24 18:20:32 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:20:32 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Nov 24 18:20:32 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 24 18:20:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:20:32 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:20:32 compute-0 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Nov 24 18:20:32 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Nov 24 18:20:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:20:32 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:32 compute-0 sudo[88557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:32 compute-0 sudo[88557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:32 compute-0 sudo[88557]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:32 compute-0 sudo[88582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:20:32 compute-0 sudo[88582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:32 compute-0 sudo[88582]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:32 compute-0 sudo[88607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:32 compute-0 sudo[88607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:32 compute-0 sudo[88607]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:32 compute-0 ceph-osd[88544]: bdev(0x55ab2518b800 /var/lib/ceph/osd/ceph-0/block) close
Nov 24 18:20:32 compute-0 sudo[88632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 18:20:32 compute-0 sudo[88632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:32 compute-0 ceph-osd[88544]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Nov 24 18:20:32 compute-0 ceph-osd[88544]: load: jerasure load: lrc 
Nov 24 18:20:32 compute-0 ceph-osd[88544]: bdev(0x55ab26044c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 18:20:32 compute-0 ceph-osd[88544]: bdev(0x55ab26044c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 18:20:32 compute-0 ceph-osd[88544]: bdev(0x55ab26044c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 18:20:32 compute-0 ceph-osd[88544]: bdev(0x55ab26044c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 18:20:32 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 18:20:32 compute-0 ceph-osd[88544]: bdev(0x55ab26044c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 24 18:20:33 compute-0 podman[88702]: 2025-11-24 18:20:33.067493868 +0000 UTC m=+0.040409082 container create 70b21e5f38ecd82a90e3fe82a786d36c0c0f43afa3f9d4a048321ff6b982b300 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_banzai, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:20:33 compute-0 systemd[1]: Started libpod-conmon-70b21e5f38ecd82a90e3fe82a786d36c0c0f43afa3f9d4a048321ff6b982b300.scope.
Nov 24 18:20:33 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:33 compute-0 podman[88702]: 2025-11-24 18:20:33.049127504 +0000 UTC m=+0.022042748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:20:33 compute-0 podman[88702]: 2025-11-24 18:20:33.159164903 +0000 UTC m=+0.132080117 container init 70b21e5f38ecd82a90e3fe82a786d36c0c0f43afa3f9d4a048321ff6b982b300 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_banzai, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:20:33 compute-0 podman[88702]: 2025-11-24 18:20:33.169153625 +0000 UTC m=+0.142068839 container start 70b21e5f38ecd82a90e3fe82a786d36c0c0f43afa3f9d4a048321ff6b982b300 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_banzai, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:20:33 compute-0 keen_banzai[88718]: 167 167
Nov 24 18:20:33 compute-0 systemd[1]: libpod-70b21e5f38ecd82a90e3fe82a786d36c0c0f43afa3f9d4a048321ff6b982b300.scope: Deactivated successfully.
Nov 24 18:20:33 compute-0 podman[88702]: 2025-11-24 18:20:33.176823159 +0000 UTC m=+0.149738393 container attach 70b21e5f38ecd82a90e3fe82a786d36c0c0f43afa3f9d4a048321ff6b982b300 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 18:20:33 compute-0 podman[88702]: 2025-11-24 18:20:33.177617199 +0000 UTC m=+0.150532413 container died 70b21e5f38ecd82a90e3fe82a786d36c0c0f43afa3f9d4a048321ff6b982b300 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_banzai, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Nov 24 18:20:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc497a1f9620ff4f7672f9b5b0fd88d2078ab24d9ae2c28919cbd94defb8bc53-merged.mount: Deactivated successfully.
Nov 24 18:20:33 compute-0 podman[88702]: 2025-11-24 18:20:33.218625595 +0000 UTC m=+0.191540819 container remove 70b21e5f38ecd82a90e3fe82a786d36c0c0f43afa3f9d4a048321ff6b982b300 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_banzai, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bdev(0x55ab26044c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bdev(0x55ab26044c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bdev(0x55ab26044c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bdev(0x55ab26044c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bdev(0x55ab26044c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 24 18:20:33 compute-0 systemd[1]: libpod-conmon-70b21e5f38ecd82a90e3fe82a786d36c0c0f43afa3f9d4a048321ff6b982b300.scope: Deactivated successfully.
Nov 24 18:20:33 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:33 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:33 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 24 18:20:33 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:20:33 compute-0 ceph-mon[74927]: Deploying daemon osd.1 on compute-0
Nov 24 18:20:33 compute-0 ceph-mon[74927]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:33 compute-0 podman[88753]: 2025-11-24 18:20:33.496627056 +0000 UTC m=+0.053937633 container create 3e24f763da7dfee426392b2c59c43d5113108daf4c1c63bf073a71a902c368dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate-test, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:20:33 compute-0 ceph-osd[88544]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 24 18:20:33 compute-0 ceph-osd[88544]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bdev(0x55ab26044c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bdev(0x55ab26044c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bdev(0x55ab26044c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bdev(0x55ab26044c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bdev(0x55ab26045400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bdev(0x55ab26045400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bdev(0x55ab26045400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bdev(0x55ab26045400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bluefs mount
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bluefs mount shared_bdev_used = 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: RocksDB version: 7.9.2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Git sha 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: DB SUMMARY
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: DB Session ID:  4HGE9OKKRCKBG2QLOBGS
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: CURRENT file:  CURRENT
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                         Options.error_if_exists: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.create_if_missing: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                                     Options.env: 0x55ab26015d50
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                                Options.info_log: 0x55ab252127e0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                              Options.statistics: (nil)
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                               Options.use_fsync: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                              Options.db_log_dir: 
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                                 Options.wal_dir: db.wal
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.write_buffer_manager: 0x55ab2611e460
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.unordered_write: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                               Options.row_cache: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                              Options.wal_filter: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.two_write_queues: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.wal_compression: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.atomic_flush: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.max_background_jobs: 4
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.max_background_compactions: -1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.max_subcompactions: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.max_open_files: -1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Compression algorithms supported:
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         kZSTD supported: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         kXpressCompression supported: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         kBZip2Compression supported: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         kLZ4Compression supported: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         kZlibCompression supported: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         kLZ4HCCompression supported: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         kSnappyCompression supported: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25212200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ab251ff1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25212200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ab251ff1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25212200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ab251ff1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25212200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ab251ff1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25212200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ab251ff1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25212200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ab251ff1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25212200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ab251ff1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25212180)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ab251ff090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25212180)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ab251ff090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25212180)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ab251ff090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 24 18:20:33 compute-0 systemd[1]: Started libpod-conmon-3e24f763da7dfee426392b2c59c43d5113108daf4c1c63bf073a71a902c368dc.scope.
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c88edec1-a146-434a-86d2-25bed20784f7
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008433535761, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008433536087, "job": 1, "event": "recovery_finished"}
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: freelist init
Nov 24 18:20:33 compute-0 ceph-osd[88544]: freelist _read_cfg
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bluefs umount
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bdev(0x55ab26045400 /var/lib/ceph/osd/ceph-0/block) close
Nov 24 18:20:33 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2967fed2a32ba5be3838bf6389ae38c7e34e201481bad1be49150d1392790ce7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2967fed2a32ba5be3838bf6389ae38c7e34e201481bad1be49150d1392790ce7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2967fed2a32ba5be3838bf6389ae38c7e34e201481bad1be49150d1392790ce7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2967fed2a32ba5be3838bf6389ae38c7e34e201481bad1be49150d1392790ce7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2967fed2a32ba5be3838bf6389ae38c7e34e201481bad1be49150d1392790ce7/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:33 compute-0 podman[88753]: 2025-11-24 18:20:33.47817527 +0000 UTC m=+0.035485867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:20:33 compute-0 podman[88753]: 2025-11-24 18:20:33.631245986 +0000 UTC m=+0.188556583 container init 3e24f763da7dfee426392b2c59c43d5113108daf4c1c63bf073a71a902c368dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate-test, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:20:33 compute-0 podman[88753]: 2025-11-24 18:20:33.637829162 +0000 UTC m=+0.195139739 container start 3e24f763da7dfee426392b2c59c43d5113108daf4c1c63bf073a71a902c368dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:20:33 compute-0 podman[88753]: 2025-11-24 18:20:33.644034099 +0000 UTC m=+0.201344676 container attach 3e24f763da7dfee426392b2c59c43d5113108daf4c1c63bf073a71a902c368dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bdev(0x55ab26045400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bdev(0x55ab26045400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bdev(0x55ab26045400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bdev(0x55ab26045400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bluefs mount
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bluefs mount shared_bdev_used = 4718592
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: RocksDB version: 7.9.2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Git sha 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: DB SUMMARY
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: DB Session ID:  4HGE9OKKRCKBG2QLOBGT
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: CURRENT file:  CURRENT
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                         Options.error_if_exists: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.create_if_missing: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                                     Options.env: 0x55ab261ae1c0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                                Options.info_log: 0x55ab26011700
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                              Options.statistics: (nil)
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                               Options.use_fsync: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                              Options.db_log_dir: 
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                                 Options.wal_dir: db.wal
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.write_buffer_manager: 0x55ab2611e6e0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.unordered_write: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                               Options.row_cache: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                              Options.wal_filter: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.two_write_queues: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.wal_compression: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.atomic_flush: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.max_background_jobs: 4
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.max_background_compactions: -1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.max_subcompactions: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.max_open_files: -1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Compression algorithms supported:
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         kZSTD supported: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         kXpressCompression supported: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         kBZip2Compression supported: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         kLZ4Compression supported: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         kZlibCompression supported: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         kLZ4HCCompression supported: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         kSnappyCompression supported: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25208f80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ab251ff1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25208f80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ab251ff1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25208f80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ab251ff1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25208f80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ab251ff1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25208f80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ab251ff1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25208f80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ab251ff1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25208f80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ab251ff1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25208fe0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ab251ff090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25208fe0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ab251ff090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25208fe0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55ab251ff090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c88edec1-a146-434a-86d2-25bed20784f7
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008433816123, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008433823065, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008433, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c88edec1-a146-434a-86d2-25bed20784f7", "db_session_id": "4HGE9OKKRCKBG2QLOBGT", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008433835596, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1593, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 467, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008433, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c88edec1-a146-434a-86d2-25bed20784f7", "db_session_id": "4HGE9OKKRCKBG2QLOBGT", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008433852301, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008433, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c88edec1-a146-434a-86d2-25bed20784f7", "db_session_id": "4HGE9OKKRCKBG2QLOBGT", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008433855218, "job": 1, "event": "recovery_finished"}
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55ab2536dc00
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: DB pointer 0x55ab26107a00
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Nov 24 18:20:33 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:20:33 compute-0 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 18:20:33 compute-0 ceph-osd[88544]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 24 18:20:33 compute-0 ceph-osd[88544]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 24 18:20:33 compute-0 ceph-osd[88544]: _get_class not permitted to load lua
Nov 24 18:20:33 compute-0 ceph-osd[88544]: _get_class not permitted to load sdk
Nov 24 18:20:33 compute-0 ceph-osd[88544]: _get_class not permitted to load test_remote_reads
Nov 24 18:20:33 compute-0 ceph-osd[88544]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 24 18:20:33 compute-0 ceph-osd[88544]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 24 18:20:33 compute-0 ceph-osd[88544]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 24 18:20:33 compute-0 ceph-osd[88544]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 24 18:20:33 compute-0 ceph-osd[88544]: osd.0 0 load_pgs
Nov 24 18:20:33 compute-0 ceph-osd[88544]: osd.0 0 load_pgs opened 0 pgs
Nov 24 18:20:33 compute-0 ceph-osd[88544]: osd.0 0 log_to_monitors true
Nov 24 18:20:33 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0[88540]: 2025-11-24T18:20:33.952+0000 7fc52ebf4740 -1 osd.0 0 log_to_monitors true
Nov 24 18:20:33 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Nov 24 18:20:33 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3004045453,v1:192.168.122.100:6803/3004045453]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 24 18:20:34 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate-test[88964]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 24 18:20:34 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate-test[88964]:                             [--no-systemd] [--no-tmpfs]
Nov 24 18:20:34 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate-test[88964]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 24 18:20:34 compute-0 systemd[1]: libpod-3e24f763da7dfee426392b2c59c43d5113108daf4c1c63bf073a71a902c368dc.scope: Deactivated successfully.
Nov 24 18:20:34 compute-0 podman[88753]: 2025-11-24 18:20:34.264396197 +0000 UTC m=+0.821706774 container died 3e24f763da7dfee426392b2c59c43d5113108daf4c1c63bf073a71a902c368dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate-test, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:20:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-2967fed2a32ba5be3838bf6389ae38c7e34e201481bad1be49150d1392790ce7-merged.mount: Deactivated successfully.
Nov 24 18:20:34 compute-0 podman[88753]: 2025-11-24 18:20:34.426758687 +0000 UTC m=+0.984069304 container remove 3e24f763da7dfee426392b2c59c43d5113108daf4c1c63bf073a71a902c368dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate-test, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 18:20:34 compute-0 systemd[1]: libpod-conmon-3e24f763da7dfee426392b2c59c43d5113108daf4c1c63bf073a71a902c368dc.scope: Deactivated successfully.
Nov 24 18:20:34 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Nov 24 18:20:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 18:20:34 compute-0 ceph-mon[74927]: from='osd.0 [v2:192.168.122.100:6802/3004045453,v1:192.168.122.100:6803/3004045453]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 24 18:20:34 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3004045453,v1:192.168.122.100:6803/3004045453]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 24 18:20:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Nov 24 18:20:34 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Nov 24 18:20:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 24 18:20:34 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3004045453,v1:192.168.122.100:6803/3004045453]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 24 18:20:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 24 18:20:34 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 18:20:34 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 18:20:34 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 18:20:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 18:20:34 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 18:20:34 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 18:20:34 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:20:34
Nov 24 18:20:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:20:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:20:34 compute-0 ceph-mgr[75218]: [balancer INFO root] No pools available
Nov 24 18:20:34 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:20:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:20:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:20:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:20:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:20:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:20:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:20:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:20:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:20:34 compute-0 systemd[1]: Reloading.
Nov 24 18:20:34 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 24 18:20:34 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 24 18:20:34 compute-0 systemd-rc-local-generator[89241]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:20:34 compute-0 systemd-sysv-generator[89245]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:20:35 compute-0 systemd[1]: Reloading.
Nov 24 18:20:35 compute-0 systemd-rc-local-generator[89283]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:20:35 compute-0 systemd-sysv-generator[89286]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:20:35 compute-0 systemd[1]: Starting Ceph osd.1 for e5ee928f-099b-569b-93c9-ecf025cbb50d...
Nov 24 18:20:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Nov 24 18:20:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 18:20:35 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3004045453,v1:192.168.122.100:6803/3004045453]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 24 18:20:35 compute-0 ceph-osd[88544]: osd.0 0 done with init, starting boot process
Nov 24 18:20:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Nov 24 18:20:35 compute-0 ceph-osd[88544]: osd.0 0 start_boot
Nov 24 18:20:35 compute-0 ceph-osd[88544]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 24 18:20:35 compute-0 ceph-osd[88544]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 24 18:20:35 compute-0 ceph-osd[88544]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 24 18:20:35 compute-0 ceph-osd[88544]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 24 18:20:35 compute-0 ceph-osd[88544]: osd.0 0  bench count 12288000 bsize 4 KiB
Nov 24 18:20:35 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Nov 24 18:20:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 18:20:35 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 18:20:35 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 18:20:35 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:35 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 18:20:35 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 18:20:35 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 18:20:35 compute-0 ceph-mon[74927]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:35 compute-0 ceph-mon[74927]: from='osd.0 [v2:192.168.122.100:6802/3004045453,v1:192.168.122.100:6803/3004045453]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 24 18:20:35 compute-0 ceph-mon[74927]: osdmap e7: 3 total, 0 up, 3 in
Nov 24 18:20:35 compute-0 ceph-mon[74927]: from='osd.0 [v2:192.168.122.100:6802/3004045453,v1:192.168.122.100:6803/3004045453]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 24 18:20:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:35 compute-0 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3004045453; not ready for session (expect reconnect)
Nov 24 18:20:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 18:20:35 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:35 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 18:20:35 compute-0 podman[89342]: 2025-11-24 18:20:35.664538377 +0000 UTC m=+0.054204910 container create aa7a00067d0094daa4df9709a8fae2182634f4f4bd1561cc83217b3cda347f0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:20:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff3a2b244f8723007f4882ddc7c135909d801d3c3276ef49ed2f7d8d198bd69b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff3a2b244f8723007f4882ddc7c135909d801d3c3276ef49ed2f7d8d198bd69b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff3a2b244f8723007f4882ddc7c135909d801d3c3276ef49ed2f7d8d198bd69b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff3a2b244f8723007f4882ddc7c135909d801d3c3276ef49ed2f7d8d198bd69b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff3a2b244f8723007f4882ddc7c135909d801d3c3276ef49ed2f7d8d198bd69b/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:35 compute-0 podman[89342]: 2025-11-24 18:20:35.639170677 +0000 UTC m=+0.028837240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:20:35 compute-0 podman[89342]: 2025-11-24 18:20:35.816138526 +0000 UTC m=+0.205805089 container init aa7a00067d0094daa4df9709a8fae2182634f4f4bd1561cc83217b3cda347f0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Nov 24 18:20:35 compute-0 podman[89342]: 2025-11-24 18:20:35.822478186 +0000 UTC m=+0.212144729 container start aa7a00067d0094daa4df9709a8fae2182634f4f4bd1561cc83217b3cda347f0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:20:35 compute-0 podman[89342]: 2025-11-24 18:20:35.856936416 +0000 UTC m=+0.246602969 container attach aa7a00067d0094daa4df9709a8fae2182634f4f4bd1561cc83217b3cda347f0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:20:36 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:36 compute-0 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3004045453; not ready for session (expect reconnect)
Nov 24 18:20:36 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 18:20:36 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:36 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 18:20:36 compute-0 ceph-mon[74927]: from='osd.0 [v2:192.168.122.100:6802/3004045453,v1:192.168.122.100:6803/3004045453]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 24 18:20:36 compute-0 ceph-mon[74927]: osdmap e8: 3 total, 0 up, 3 in
Nov 24 18:20:36 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:36 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:36 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:36 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:36 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate[89357]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 24 18:20:36 compute-0 bash[89342]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 24 18:20:36 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate[89357]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 24 18:20:36 compute-0 bash[89342]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 24 18:20:36 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate[89357]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 24 18:20:36 compute-0 bash[89342]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 24 18:20:36 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate[89357]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 24 18:20:36 compute-0 bash[89342]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 24 18:20:36 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate[89357]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 24 18:20:36 compute-0 bash[89342]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 24 18:20:36 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate[89357]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 24 18:20:36 compute-0 bash[89342]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 24 18:20:36 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate[89357]: --> ceph-volume raw activate successful for osd ID: 1
Nov 24 18:20:36 compute-0 bash[89342]: --> ceph-volume raw activate successful for osd ID: 1
Nov 24 18:20:36 compute-0 systemd[1]: libpod-aa7a00067d0094daa4df9709a8fae2182634f4f4bd1561cc83217b3cda347f0a.scope: Deactivated successfully.
Nov 24 18:20:36 compute-0 systemd[1]: libpod-aa7a00067d0094daa4df9709a8fae2182634f4f4bd1561cc83217b3cda347f0a.scope: Consumed 1.101s CPU time.
Nov 24 18:20:36 compute-0 conmon[89357]: conmon aa7a00067d0094daa4df <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aa7a00067d0094daa4df9709a8fae2182634f4f4bd1561cc83217b3cda347f0a.scope/container/memory.events
Nov 24 18:20:36 compute-0 podman[89342]: 2025-11-24 18:20:36.925536695 +0000 UTC m=+1.315203218 container died aa7a00067d0094daa4df9709a8fae2182634f4f4bd1561cc83217b3cda347f0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:20:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff3a2b244f8723007f4882ddc7c135909d801d3c3276ef49ed2f7d8d198bd69b-merged.mount: Deactivated successfully.
Nov 24 18:20:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:20:37 compute-0 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3004045453; not ready for session (expect reconnect)
Nov 24 18:20:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 18:20:37 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:37 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 18:20:37 compute-0 podman[89342]: 2025-11-24 18:20:37.606728769 +0000 UTC m=+1.996395332 container remove aa7a00067d0094daa4df9709a8fae2182634f4f4bd1561cc83217b3cda347f0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:20:37 compute-0 ceph-mon[74927]: purged_snaps scrub starts
Nov 24 18:20:37 compute-0 ceph-mon[74927]: purged_snaps scrub ok
Nov 24 18:20:37 compute-0 ceph-mon[74927]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:37 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:37 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:37 compute-0 podman[89537]: 2025-11-24 18:20:37.903654838 +0000 UTC m=+0.099838012 container create edbd9c794ff6da0dfcdb98ed6aaaf0ca5ebc8143fc908cf4f59185fafabf5dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:20:37 compute-0 podman[89537]: 2025-11-24 18:20:37.839610481 +0000 UTC m=+0.035793665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:20:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9091d7b35ccc3e5ef83d0c61ea1a3f7f6bcce05a3a37047686ce2a2e7105ae38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9091d7b35ccc3e5ef83d0c61ea1a3f7f6bcce05a3a37047686ce2a2e7105ae38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9091d7b35ccc3e5ef83d0c61ea1a3f7f6bcce05a3a37047686ce2a2e7105ae38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9091d7b35ccc3e5ef83d0c61ea1a3f7f6bcce05a3a37047686ce2a2e7105ae38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9091d7b35ccc3e5ef83d0c61ea1a3f7f6bcce05a3a37047686ce2a2e7105ae38/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:38 compute-0 podman[89537]: 2025-11-24 18:20:38.181291839 +0000 UTC m=+0.377475063 container init edbd9c794ff6da0dfcdb98ed6aaaf0ca5ebc8143fc908cf4f59185fafabf5dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:20:38 compute-0 podman[89537]: 2025-11-24 18:20:38.194940514 +0000 UTC m=+0.391123688 container start edbd9c794ff6da0dfcdb98ed6aaaf0ca5ebc8143fc908cf4f59185fafabf5dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 18:20:38 compute-0 sudo[89578]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbizdqegqgfzpqrbfmeqfeyawmjqntsq ; /usr/bin/python3'
Nov 24 18:20:38 compute-0 sudo[89578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:20:38 compute-0 ceph-osd[89581]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 18:20:38 compute-0 ceph-osd[89581]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 24 18:20:38 compute-0 ceph-osd[89581]: pidfile_write: ignore empty --pid-file
Nov 24 18:20:38 compute-0 ceph-osd[89581]: bdev(0x560b4055b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 18:20:38 compute-0 ceph-osd[89581]: bdev(0x560b4055b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 18:20:38 compute-0 ceph-osd[89581]: bdev(0x560b4055b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 18:20:38 compute-0 ceph-osd[89581]: bdev(0x560b4055b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 18:20:38 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 18:20:38 compute-0 ceph-osd[89581]: bdev(0x560b41393800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 18:20:38 compute-0 ceph-osd[89581]: bdev(0x560b41393800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 18:20:38 compute-0 ceph-osd[89581]: bdev(0x560b41393800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 18:20:38 compute-0 ceph-osd[89581]: bdev(0x560b41393800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 18:20:38 compute-0 ceph-osd[89581]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 24 18:20:38 compute-0 ceph-osd[89581]: bdev(0x560b41393800 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 18:20:38 compute-0 ceph-osd[89581]: bdev(0x560b4055b800 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 18:20:38 compute-0 bash[89537]: edbd9c794ff6da0dfcdb98ed6aaaf0ca5ebc8143fc908cf4f59185fafabf5dd8
Nov 24 18:20:38 compute-0 systemd[1]: Started Ceph osd.1 for e5ee928f-099b-569b-93c9-ecf025cbb50d.
Nov 24 18:20:38 compute-0 python3[89582]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:20:38 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:38 compute-0 ceph-osd[89581]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 24 18:20:38 compute-0 ceph-osd[89581]: load: jerasure load: lrc 
Nov 24 18:20:38 compute-0 ceph-osd[89581]: bdev(0x560b41414c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 18:20:38 compute-0 ceph-osd[89581]: bdev(0x560b41414c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 18:20:38 compute-0 ceph-osd[89581]: bdev(0x560b41414c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 18:20:38 compute-0 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3004045453; not ready for session (expect reconnect)
Nov 24 18:20:38 compute-0 ceph-osd[89581]: bdev(0x560b41414c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 18:20:38 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 18:20:38 compute-0 ceph-osd[89581]: bdev(0x560b41414c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 18:20:38 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 18:20:38 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:38 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 18:20:38 compute-0 podman[89598]: 2025-11-24 18:20:38.50404085 +0000 UTC m=+0.035638421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:20:38 compute-0 sudo[88632]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:38 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:20:38 compute-0 podman[89598]: 2025-11-24 18:20:38.738994984 +0000 UTC m=+0.270592495 container create 57a4ec467aec38b579ca04ca1b25e29b8c22f3f8f1d4ae950daea2e89aece526 (image=quay.io/ceph/ceph:v18, name=adoring_jackson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:20:38 compute-0 ceph-osd[89581]: bdev(0x560b41414c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 18:20:38 compute-0 ceph-osd[89581]: bdev(0x560b41414c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 18:20:38 compute-0 ceph-osd[89581]: bdev(0x560b41414c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 18:20:38 compute-0 ceph-osd[89581]: bdev(0x560b41414c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 18:20:38 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 18:20:38 compute-0 ceph-osd[89581]: bdev(0x560b41414c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 18:20:38 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:38 compute-0 systemd[1]: Started libpod-conmon-57a4ec467aec38b579ca04ca1b25e29b8c22f3f8f1d4ae950daea2e89aece526.scope.
Nov 24 18:20:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a65c08a2c6a8cddee7844bd1a3d314cfee7dcb63eca1f6886140b1c5a4331fb4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a65c08a2c6a8cddee7844bd1a3d314cfee7dcb63eca1f6886140b1c5a4331fb4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a65c08a2c6a8cddee7844bd1a3d314cfee7dcb63eca1f6886140b1c5a4331fb4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:39 compute-0 ceph-osd[89581]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 24 18:20:39 compute-0 ceph-osd[89581]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bdev(0x560b41414c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bdev(0x560b41414c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bdev(0x560b41414c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bdev(0x560b41414c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bdev(0x560b41415400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bdev(0x560b41415400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bdev(0x560b41415400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bdev(0x560b41415400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bluefs mount
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bluefs mount shared_bdev_used = 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: RocksDB version: 7.9.2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Git sha 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: DB SUMMARY
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: DB Session ID:  M68LIBJHY0K5KHYYLOTW
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: CURRENT file:  CURRENT
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                         Options.error_if_exists: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.create_if_missing: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                                     Options.env: 0x560b413e5c70
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                                Options.info_log: 0x560b405e28a0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                              Options.statistics: (nil)
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                               Options.use_fsync: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                              Options.db_log_dir: 
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                                 Options.wal_dir: db.wal
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.write_buffer_manager: 0x560b414ee460
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 18:20:39 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.unordered_write: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                               Options.row_cache: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                              Options.wal_filter: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.two_write_queues: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.wal_compression: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.atomic_flush: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.max_background_jobs: 4
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.max_background_compactions: -1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.max_subcompactions: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.max_open_files: -1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Compression algorithms supported:
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         kZSTD supported: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         kXpressCompression supported: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         kBZip2Compression supported: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         kLZ4Compression supported: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         kZlibCompression supported: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         kLZ4HCCompression supported: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         kSnappyCompression supported: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e22c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560b405cf1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e22c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560b405cf1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e22c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560b405cf1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e22c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560b405cf1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e22c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560b405cf1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e22c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560b405cf1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e22c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560b405cf1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e2240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560b405cf090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e2240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560b405cf090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e2240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560b405cf090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 51802cb1-f710-439e-8cb3-c13c7c81f345
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008439106949, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008439107157, "job": 1, "event": "recovery_finished"}
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: freelist init
Nov 24 18:20:39 compute-0 ceph-osd[89581]: freelist _read_cfg
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bluefs umount
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bdev(0x560b41415400 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 18:20:39 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:39 compute-0 podman[89598]: 2025-11-24 18:20:39.252273207 +0000 UTC m=+0.783870728 container init 57a4ec467aec38b579ca04ca1b25e29b8c22f3f8f1d4ae950daea2e89aece526 (image=quay.io/ceph/ceph:v18, name=adoring_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 18:20:39 compute-0 podman[89598]: 2025-11-24 18:20:39.262755672 +0000 UTC m=+0.794353173 container start 57a4ec467aec38b579ca04ca1b25e29b8c22f3f8f1d4ae950daea2e89aece526 (image=quay.io/ceph/ceph:v18, name=adoring_jackson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bdev(0x560b41415400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bdev(0x560b41415400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bdev(0x560b41415400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bdev(0x560b41415400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bluefs mount
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bluefs mount shared_bdev_used = 4718592
Nov 24 18:20:39 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: RocksDB version: 7.9.2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Git sha 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: DB SUMMARY
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: DB Session ID:  M68LIBJHY0K5KHYYLOTX
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: CURRENT file:  CURRENT
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                         Options.error_if_exists: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.create_if_missing: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                                     Options.env: 0x560b41596b60
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                                Options.info_log: 0x560b405e2600
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                              Options.statistics: (nil)
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                               Options.use_fsync: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                              Options.db_log_dir: 
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                                 Options.wal_dir: db.wal
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.write_buffer_manager: 0x560b414ee460
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.unordered_write: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                               Options.row_cache: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                              Options.wal_filter: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.two_write_queues: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.wal_compression: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.atomic_flush: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.max_background_jobs: 4
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.max_background_compactions: -1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.max_subcompactions: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.max_open_files: -1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Compression algorithms supported:
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         kZSTD supported: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         kXpressCompression supported: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         kBZip2Compression supported: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         kLZ4Compression supported: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         kZlibCompression supported: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         kLZ4HCCompression supported: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         kSnappyCompression supported: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e2a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560b405cf1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e2a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560b405cf1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e2a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560b405cf1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e2a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560b405cf1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e2a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560b405cf1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e2a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560b405cf1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e2a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560b405cf1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e2380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560b405cf090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e2380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560b405cf090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e2380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560b405cf090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 51802cb1-f710-439e-8cb3-c13c7c81f345
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008439376742, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 24 18:20:39 compute-0 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3004045453; not ready for session (expect reconnect)
Nov 24 18:20:39 compute-0 podman[89598]: 2025-11-24 18:20:39.563030486 +0000 UTC m=+1.094627977 container attach 57a4ec467aec38b579ca04ca1b25e29b8c22f3f8f1d4ae950daea2e89aece526 (image=quay.io/ceph/ceph:v18, name=adoring_jackson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 18:20:39 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 18:20:39 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:39 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008439741429, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008439, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "51802cb1-f710-439e-8cb3-c13c7c81f345", "db_session_id": "M68LIBJHY0K5KHYYLOTX", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:20:39 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008439746577, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008439, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "51802cb1-f710-439e-8cb3-c13c7c81f345", "db_session_id": "M68LIBJHY0K5KHYYLOTX", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:20:39 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Nov 24 18:20:39 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 24 18:20:39 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:20:39 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:20:39 compute-0 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Nov 24 18:20:39 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008439801336, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008439, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "51802cb1-f710-439e-8cb3-c13c7c81f345", "db_session_id": "M68LIBJHY0K5KHYYLOTX", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:20:39 compute-0 sudo[90023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:39 compute-0 sudo[90023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:39 compute-0 sudo[90023]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:39 compute-0 sudo[90048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:20:39 compute-0 sudo[90048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:39 compute-0 sudo[90048]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:39 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 24 18:20:39 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2656158072' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 18:20:39 compute-0 adoring_jackson[89624]: 
Nov 24 18:20:39 compute-0 adoring_jackson[89624]: {"fsid":"e5ee928f-099b-569b-93c9-ecf025cbb50d","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":112,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":8,"num_osds":3,"num_up_osds":0,"osd_up_since":0,"num_in_osds":3,"osd_in_since":1764008421,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-24T18:20:36.466398+0000","services":{}},"progress_events":{}}
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008439896810, "job": 1, "event": "recovery_finished"}
Nov 24 18:20:39 compute-0 ceph-osd[89581]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 24 18:20:39 compute-0 systemd[1]: libpod-57a4ec467aec38b579ca04ca1b25e29b8c22f3f8f1d4ae950daea2e89aece526.scope: Deactivated successfully.
Nov 24 18:20:39 compute-0 podman[89598]: 2025-11-24 18:20:39.915331194 +0000 UTC m=+1.446928685 container died 57a4ec467aec38b579ca04ca1b25e29b8c22f3f8f1d4ae950daea2e89aece526 (image=quay.io/ceph/ceph:v18, name=adoring_jackson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:20:39 compute-0 sudo[90073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:39 compute-0 sudo[90073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:39 compute-0 sudo[90073]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:39 compute-0 sudo[90106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 18:20:39 compute-0 sudo[90106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-a65c08a2c6a8cddee7844bd1a3d314cfee7dcb63eca1f6886140b1c5a4331fb4-merged.mount: Deactivated successfully.
Nov 24 18:20:40 compute-0 ceph-mon[74927]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:40 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:40 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:40 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:40 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 24 18:20:40 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:20:40 compute-0 ceph-mon[74927]: Deploying daemon osd.2 on compute-0
Nov 24 18:20:40 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2656158072' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 18:20:40 compute-0 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x560b4073c000
Nov 24 18:20:40 compute-0 ceph-osd[89581]: rocksdb: DB pointer 0x560b414d7a00
Nov 24 18:20:40 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 24 18:20:40 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Nov 24 18:20:40 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Nov 24 18:20:40 compute-0 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:20:40 compute-0 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1.0 total, 1.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.0 total, 1.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.0 total, 1.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.0 total, 1.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.0 total, 1.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.0 total, 1.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.0 total, 1.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.0 total, 1.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.0 total, 1.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.0 total, 1.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.055       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.055       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.055       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.055       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.0 total, 1.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.095       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.095       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.095       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.095       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.0 total, 1.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.0 total, 1.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 18:20:40 compute-0 ceph-osd[89581]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 24 18:20:40 compute-0 ceph-osd[89581]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 24 18:20:40 compute-0 ceph-osd[89581]: _get_class not permitted to load lua
Nov 24 18:20:40 compute-0 ceph-osd[89581]: _get_class not permitted to load sdk
Nov 24 18:20:40 compute-0 ceph-osd[89581]: _get_class not permitted to load test_remote_reads
Nov 24 18:20:40 compute-0 ceph-osd[89581]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 24 18:20:40 compute-0 ceph-osd[89581]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 24 18:20:40 compute-0 ceph-osd[89581]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 24 18:20:40 compute-0 ceph-osd[89581]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 24 18:20:40 compute-0 ceph-osd[89581]: osd.1 0 load_pgs
Nov 24 18:20:40 compute-0 ceph-osd[89581]: osd.1 0 load_pgs opened 0 pgs
Nov 24 18:20:40 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1[89552]: 2025-11-24T18:20:40.406+0000 7f4b025f9740 -1 osd.1 0 log_to_monitors true
Nov 24 18:20:40 compute-0 ceph-osd[89581]: osd.1 0 log_to_monitors true
Nov 24 18:20:40 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Nov 24 18:20:40 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1623794735,v1:192.168.122.100:6807/1623794735]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 24 18:20:40 compute-0 podman[89598]: 2025-11-24 18:20:40.429092969 +0000 UTC m=+1.960690460 container remove 57a4ec467aec38b579ca04ca1b25e29b8c22f3f8f1d4ae950daea2e89aece526 (image=quay.io/ceph/ceph:v18, name=adoring_jackson, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:20:40 compute-0 systemd[1]: libpod-conmon-57a4ec467aec38b579ca04ca1b25e29b8c22f3f8f1d4ae950daea2e89aece526.scope: Deactivated successfully.
Nov 24 18:20:40 compute-0 sudo[89578]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:40 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:40 compute-0 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3004045453; not ready for session (expect reconnect)
Nov 24 18:20:40 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 18:20:40 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:40 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 18:20:40 compute-0 podman[90212]: 2025-11-24 18:20:40.551803939 +0000 UTC m=+0.021887834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:20:40 compute-0 podman[90212]: 2025-11-24 18:20:40.675572634 +0000 UTC m=+0.145656529 container create 9daea98b02596dbf5817f2b66743d62ab8d4117df2fea0a60da9b1614ae4bd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_robinson, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:20:40 compute-0 systemd[1]: Started libpod-conmon-9daea98b02596dbf5817f2b66743d62ab8d4117df2fea0a60da9b1614ae4bd23.scope.
Nov 24 18:20:40 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:40 compute-0 podman[90212]: 2025-11-24 18:20:40.802151461 +0000 UTC m=+0.272235366 container init 9daea98b02596dbf5817f2b66743d62ab8d4117df2fea0a60da9b1614ae4bd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:20:40 compute-0 podman[90212]: 2025-11-24 18:20:40.807926907 +0000 UTC m=+0.278010792 container start 9daea98b02596dbf5817f2b66743d62ab8d4117df2fea0a60da9b1614ae4bd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_robinson, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 18:20:40 compute-0 nervous_robinson[90229]: 167 167
Nov 24 18:20:40 compute-0 systemd[1]: libpod-9daea98b02596dbf5817f2b66743d62ab8d4117df2fea0a60da9b1614ae4bd23.scope: Deactivated successfully.
Nov 24 18:20:40 compute-0 podman[90212]: 2025-11-24 18:20:40.828317692 +0000 UTC m=+0.298401597 container attach 9daea98b02596dbf5817f2b66743d62ab8d4117df2fea0a60da9b1614ae4bd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_robinson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 24 18:20:40 compute-0 podman[90212]: 2025-11-24 18:20:40.828751383 +0000 UTC m=+0.298835268 container died 9daea98b02596dbf5817f2b66743d62ab8d4117df2fea0a60da9b1614ae4bd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 18:20:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-180ff14d8594bd53788712a3cbe6db9c9713cc8d3ad0406e2595591fb8286187-merged.mount: Deactivated successfully.
Nov 24 18:20:40 compute-0 podman[90212]: 2025-11-24 18:20:40.931146209 +0000 UTC m=+0.401230134 container remove 9daea98b02596dbf5817f2b66743d62ab8d4117df2fea0a60da9b1614ae4bd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 18:20:40 compute-0 systemd[1]: libpod-conmon-9daea98b02596dbf5817f2b66743d62ab8d4117df2fea0a60da9b1614ae4bd23.scope: Deactivated successfully.
Nov 24 18:20:41 compute-0 podman[90261]: 2025-11-24 18:20:41.232286465 +0000 UTC m=+0.068049500 container create 82a9c183b3f62395376c91cb8bcfd9c9da9a63532b2fe4f4f18edb8c7afdd6af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:20:41 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Nov 24 18:20:41 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 18:20:41 compute-0 podman[90261]: 2025-11-24 18:20:41.196563763 +0000 UTC m=+0.032326898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:20:41 compute-0 ceph-mon[74927]: from='osd.1 [v2:192.168.122.100:6806/1623794735,v1:192.168.122.100:6807/1623794735]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 24 18:20:41 compute-0 ceph-mon[74927]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:41 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:41 compute-0 systemd[1]: Started libpod-conmon-82a9c183b3f62395376c91cb8bcfd9c9da9a63532b2fe4f4f18edb8c7afdd6af.scope.
Nov 24 18:20:41 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93211d7b969363c0b9a6df6d83ba6ff6ded9c3f4c2cbd87cc521a9a5adb61ad3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:41 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1623794735,v1:192.168.122.100:6807/1623794735]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 24 18:20:41 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Nov 24 18:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93211d7b969363c0b9a6df6d83ba6ff6ded9c3f4c2cbd87cc521a9a5adb61ad3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93211d7b969363c0b9a6df6d83ba6ff6ded9c3f4c2cbd87cc521a9a5adb61ad3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93211d7b969363c0b9a6df6d83ba6ff6ded9c3f4c2cbd87cc521a9a5adb61ad3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93211d7b969363c0b9a6df6d83ba6ff6ded9c3f4c2cbd87cc521a9a5adb61ad3/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:41 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Nov 24 18:20:41 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 18:20:41 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:41 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 24 18:20:41 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1623794735,v1:192.168.122.100:6807/1623794735]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 24 18:20:41 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 24 18:20:41 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 18:20:41 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 18:20:41 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:41 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 18:20:41 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:41 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 18:20:41 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 18:20:41 compute-0 podman[90261]: 2025-11-24 18:20:41.355256401 +0000 UTC m=+0.191019446 container init 82a9c183b3f62395376c91cb8bcfd9c9da9a63532b2fe4f4f18edb8c7afdd6af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate-test, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:20:41 compute-0 podman[90261]: 2025-11-24 18:20:41.363054128 +0000 UTC m=+0.198817173 container start 82a9c183b3f62395376c91cb8bcfd9c9da9a63532b2fe4f4f18edb8c7afdd6af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 18:20:41 compute-0 podman[90261]: 2025-11-24 18:20:41.374661411 +0000 UTC m=+0.210424446 container attach 82a9c183b3f62395376c91cb8bcfd9c9da9a63532b2fe4f4f18edb8c7afdd6af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 18:20:41 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 24 18:20:41 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 24 18:20:41 compute-0 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3004045453; not ready for session (expect reconnect)
Nov 24 18:20:41 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 18:20:41 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:41 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 18:20:42 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate-test[90276]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 24 18:20:42 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate-test[90276]:                             [--no-systemd] [--no-tmpfs]
Nov 24 18:20:42 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate-test[90276]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 24 18:20:42 compute-0 systemd[1]: libpod-82a9c183b3f62395376c91cb8bcfd9c9da9a63532b2fe4f4f18edb8c7afdd6af.scope: Deactivated successfully.
Nov 24 18:20:42 compute-0 podman[90261]: 2025-11-24 18:20:42.055871954 +0000 UTC m=+0.891635029 container died 82a9c183b3f62395376c91cb8bcfd9c9da9a63532b2fe4f4f18edb8c7afdd6af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:20:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-93211d7b969363c0b9a6df6d83ba6ff6ded9c3f4c2cbd87cc521a9a5adb61ad3-merged.mount: Deactivated successfully.
Nov 24 18:20:42 compute-0 podman[90261]: 2025-11-24 18:20:42.137518576 +0000 UTC m=+0.973281611 container remove 82a9c183b3f62395376c91cb8bcfd9c9da9a63532b2fe4f4f18edb8c7afdd6af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 18:20:42 compute-0 systemd[1]: libpod-conmon-82a9c183b3f62395376c91cb8bcfd9c9da9a63532b2fe4f4f18edb8c7afdd6af.scope: Deactivated successfully.
Nov 24 18:20:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Nov 24 18:20:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 18:20:42 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1623794735,v1:192.168.122.100:6807/1623794735]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 24 18:20:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e10 e10: 3 total, 0 up, 3 in
Nov 24 18:20:42 compute-0 ceph-osd[89581]: osd.1 0 done with init, starting boot process
Nov 24 18:20:42 compute-0 ceph-osd[89581]: osd.1 0 start_boot
Nov 24 18:20:42 compute-0 ceph-osd[89581]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 24 18:20:42 compute-0 ceph-osd[89581]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 24 18:20:42 compute-0 ceph-osd[89581]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 24 18:20:42 compute-0 ceph-osd[89581]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 24 18:20:42 compute-0 ceph-osd[89581]: osd.1 0  bench count 12288000 bsize 4 KiB
Nov 24 18:20:42 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 0 up, 3 in
Nov 24 18:20:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 18:20:42 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 18:20:42 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 18:20:42 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:42 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 18:20:42 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 18:20:42 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 18:20:42 compute-0 ceph-mon[74927]: from='osd.1 [v2:192.168.122.100:6806/1623794735,v1:192.168.122.100:6807/1623794735]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 24 18:20:42 compute-0 ceph-mon[74927]: osdmap e9: 3 total, 0 up, 3 in
Nov 24 18:20:42 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:42 compute-0 ceph-mon[74927]: from='osd.1 [v2:192.168.122.100:6806/1623794735,v1:192.168.122.100:6807/1623794735]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 24 18:20:42 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:42 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:42 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:42 compute-0 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1623794735; not ready for session (expect reconnect)
Nov 24 18:20:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 18:20:42 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:42 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 18:20:42 compute-0 systemd[1]: Reloading.
Nov 24 18:20:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:20:42 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:42 compute-0 systemd-rc-local-generator[90335]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:20:42 compute-0 systemd-sysv-generator[90340]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:20:42 compute-0 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3004045453; not ready for session (expect reconnect)
Nov 24 18:20:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 18:20:42 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:42 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 18:20:42 compute-0 ceph-osd[88544]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 22.320 iops: 5713.855 elapsed_sec: 0.525
Nov 24 18:20:42 compute-0 ceph-osd[88544]: log_channel(cluster) log [WRN] : OSD bench result of 5713.854944 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 24 18:20:42 compute-0 ceph-osd[88544]: osd.0 0 waiting for initial osdmap
Nov 24 18:20:42 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0[88540]: 2025-11-24T18:20:42.693+0000 7fc52ab74640 -1 osd.0 0 waiting for initial osdmap
Nov 24 18:20:42 compute-0 ceph-osd[88544]: osd.0 10 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 24 18:20:42 compute-0 ceph-osd[88544]: osd.0 10 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 24 18:20:42 compute-0 ceph-osd[88544]: osd.0 10 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 24 18:20:42 compute-0 ceph-osd[88544]: osd.0 10 check_osdmap_features require_osd_release unknown -> reef
Nov 24 18:20:42 compute-0 ceph-osd[88544]: osd.0 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 24 18:20:42 compute-0 ceph-osd[88544]: osd.0 10 set_numa_affinity not setting numa affinity
Nov 24 18:20:42 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0[88540]: 2025-11-24T18:20:42.730+0000 7fc52619c640 -1 osd.0 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 24 18:20:42 compute-0 ceph-osd[88544]: osd.0 10 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Nov 24 18:20:42 compute-0 systemd[1]: Reloading.
Nov 24 18:20:42 compute-0 systemd-rc-local-generator[90378]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:20:42 compute-0 systemd-sysv-generator[90382]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:20:43 compute-0 systemd[1]: Starting Ceph osd.2 for e5ee928f-099b-569b-93c9-ecf025cbb50d...
Nov 24 18:20:43 compute-0 podman[90438]: 2025-11-24 18:20:43.301894704 +0000 UTC m=+0.052324243 container create 9077f4fb005d5e613a94a4facab79446737f61280eb2394ea1d5afe9fc1e924a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 18:20:43 compute-0 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1623794735; not ready for session (expect reconnect)
Nov 24 18:20:43 compute-0 podman[90438]: 2025-11-24 18:20:43.270126591 +0000 UTC m=+0.020556180 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:20:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Nov 24 18:20:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 18:20:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 18:20:43 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:43 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 18:20:43 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3978808a570ed4f80861a1985084b6e1ae4a8e02a9119bcb3fd3f356753bf3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3978808a570ed4f80861a1985084b6e1ae4a8e02a9119bcb3fd3f356753bf3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3978808a570ed4f80861a1985084b6e1ae4a8e02a9119bcb3fd3f356753bf3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3978808a570ed4f80861a1985084b6e1ae4a8e02a9119bcb3fd3f356753bf3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3978808a570ed4f80861a1985084b6e1ae4a8e02a9119bcb3fd3f356753bf3/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Nov 24 18:20:43 compute-0 podman[90438]: 2025-11-24 18:20:43.460497859 +0000 UTC m=+0.210927398 container init 9077f4fb005d5e613a94a4facab79446737f61280eb2394ea1d5afe9fc1e924a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 18:20:43 compute-0 ceph-mon[74927]: from='osd.1 [v2:192.168.122.100:6806/1623794735,v1:192.168.122.100:6807/1623794735]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 24 18:20:43 compute-0 ceph-mon[74927]: osdmap e10: 3 total, 0 up, 3 in
Nov 24 18:20:43 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:43 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:43 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:43 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:43 compute-0 ceph-mon[74927]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 18:20:43 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:43 compute-0 ceph-mon[74927]: OSD bench result of 5713.854944 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 24 18:20:43 compute-0 podman[90438]: 2025-11-24 18:20:43.467116937 +0000 UTC m=+0.217546466 container start 9077f4fb005d5e613a94a4facab79446737f61280eb2394ea1d5afe9fc1e924a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:20:43 compute-0 ceph-osd[88544]: osd.0 11 state: booting -> active
Nov 24 18:20:43 compute-0 ceph-mon[74927]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/3004045453,v1:192.168.122.100:6803/3004045453] boot
Nov 24 18:20:43 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Nov 24 18:20:43 compute-0 podman[90438]: 2025-11-24 18:20:43.508450521 +0000 UTC m=+0.258880090 container attach 9077f4fb005d5e613a94a4facab79446737f61280eb2394ea1d5afe9fc1e924a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 18:20:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 18:20:43 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 18:20:43 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 18:20:43 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:43 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 18:20:43 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 18:20:44 compute-0 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1623794735; not ready for session (expect reconnect)
Nov 24 18:20:44 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 18:20:44 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:44 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 18:20:44 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 24 18:20:44 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate[90453]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 24 18:20:44 compute-0 bash[90438]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 24 18:20:44 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate[90453]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 24 18:20:44 compute-0 bash[90438]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 24 18:20:44 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate[90453]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 24 18:20:44 compute-0 bash[90438]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 24 18:20:44 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate[90453]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 24 18:20:44 compute-0 bash[90438]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 24 18:20:44 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate[90453]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 24 18:20:44 compute-0 bash[90438]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 24 18:20:44 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate[90453]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 24 18:20:44 compute-0 bash[90438]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 24 18:20:44 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate[90453]: --> ceph-volume raw activate successful for osd ID: 2
Nov 24 18:20:44 compute-0 bash[90438]: --> ceph-volume raw activate successful for osd ID: 2
Nov 24 18:20:44 compute-0 systemd[1]: libpod-9077f4fb005d5e613a94a4facab79446737f61280eb2394ea1d5afe9fc1e924a.scope: Deactivated successfully.
Nov 24 18:20:44 compute-0 podman[90438]: 2025-11-24 18:20:44.566094152 +0000 UTC m=+1.316523681 container died 9077f4fb005d5e613a94a4facab79446737f61280eb2394ea1d5afe9fc1e924a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:20:44 compute-0 systemd[1]: libpod-9077f4fb005d5e613a94a4facab79446737f61280eb2394ea1d5afe9fc1e924a.scope: Consumed 1.110s CPU time.
Nov 24 18:20:44 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Nov 24 18:20:44 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e11 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 18:20:44 compute-0 ceph-mon[74927]: purged_snaps scrub starts
Nov 24 18:20:44 compute-0 ceph-mon[74927]: purged_snaps scrub ok
Nov 24 18:20:44 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:44 compute-0 ceph-mon[74927]: osd.0 [v2:192.168.122.100:6802/3004045453,v1:192.168.122.100:6803/3004045453] boot
Nov 24 18:20:44 compute-0 ceph-mon[74927]: osdmap e11: 3 total, 1 up, 3 in
Nov 24 18:20:44 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 18:20:44 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:44 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:44 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:44 compute-0 ceph-mgr[75218]: [devicehealth INFO root] creating mgr pool
Nov 24 18:20:45 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Nov 24 18:20:45 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 24 18:20:45 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Nov 24 18:20:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf3978808a570ed4f80861a1985084b6e1ae4a8e02a9119bcb3fd3f356753bf3-merged.mount: Deactivated successfully.
Nov 24 18:20:45 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Nov 24 18:20:45 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 18:20:45 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:45 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 18:20:45 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:45 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 18:20:45 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 18:20:45 compute-0 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1623794735; not ready for session (expect reconnect)
Nov 24 18:20:45 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 18:20:45 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:45 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 18:20:45 compute-0 podman[90438]: 2025-11-24 18:20:45.579868915 +0000 UTC m=+2.330298444 container remove 9077f4fb005d5e613a94a4facab79446737f61280eb2394ea1d5afe9fc1e924a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:20:45 compute-0 ceph-mon[74927]: pgmap v37: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 24 18:20:45 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 24 18:20:45 compute-0 ceph-mon[74927]: osdmap e12: 3 total, 1 up, 3 in
Nov 24 18:20:45 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:45 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:45 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:45 compute-0 podman[90636]: 2025-11-24 18:20:45.783353084 +0000 UTC m=+0.021266688 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:20:46 compute-0 podman[90636]: 2025-11-24 18:20:46.066404012 +0000 UTC m=+0.304317586 container create d4b4bd73407edf8b64315195325242832b213f100b6f7c4e8a80bbdc340ec673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 24 18:20:46 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Nov 24 18:20:46 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e12 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 18:20:46 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 24 18:20:46 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Nov 24 18:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63cecfee427ea3d2bf3c04dc6c2b5a0a565c00ff072d537fb33dfb3b5565254c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63cecfee427ea3d2bf3c04dc6c2b5a0a565c00ff072d537fb33dfb3b5565254c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63cecfee427ea3d2bf3c04dc6c2b5a0a565c00ff072d537fb33dfb3b5565254c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63cecfee427ea3d2bf3c04dc6c2b5a0a565c00ff072d537fb33dfb3b5565254c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63cecfee427ea3d2bf3c04dc6c2b5a0a565c00ff072d537fb33dfb3b5565254c/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:46 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e13 crush map has features 3314933000852226048, adjusting msgr requires
Nov 24 18:20:46 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e13 crush map has features 288514051259236352, adjusting msgr requires
Nov 24 18:20:46 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e13 crush map has features 288514051259236352, adjusting msgr requires
Nov 24 18:20:46 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e13 crush map has features 288514051259236352, adjusting msgr requires
Nov 24 18:20:46 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Nov 24 18:20:46 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 18:20:46 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:46 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 18:20:46 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:46 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 18:20:46 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 18:20:46 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Nov 24 18:20:46 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 24 18:20:46 compute-0 podman[90636]: 2025-11-24 18:20:46.323402204 +0000 UTC m=+0.561315798 container init d4b4bd73407edf8b64315195325242832b213f100b6f7c4e8a80bbdc340ec673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Nov 24 18:20:46 compute-0 podman[90636]: 2025-11-24 18:20:46.328870222 +0000 UTC m=+0.566783816 container start d4b4bd73407edf8b64315195325242832b213f100b6f7c4e8a80bbdc340ec673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Nov 24 18:20:46 compute-0 ceph-osd[90655]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 18:20:46 compute-0 ceph-osd[90655]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 24 18:20:46 compute-0 ceph-osd[90655]: pidfile_write: ignore empty --pid-file
Nov 24 18:20:46 compute-0 ceph-osd[90655]: bdev(0x55685d8b7800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 24 18:20:46 compute-0 ceph-osd[90655]: bdev(0x55685d8b7800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 24 18:20:46 compute-0 ceph-osd[90655]: bdev(0x55685d8b7800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 18:20:46 compute-0 ceph-osd[90655]: bdev(0x55685d8b7800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 18:20:46 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 18:20:46 compute-0 ceph-osd[90655]: bdev(0x55685e6ef800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 24 18:20:46 compute-0 ceph-osd[90655]: bdev(0x55685e6ef800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 24 18:20:46 compute-0 ceph-osd[90655]: bdev(0x55685e6ef800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 18:20:46 compute-0 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1623794735; not ready for session (expect reconnect)
Nov 24 18:20:46 compute-0 ceph-osd[90655]: bdev(0x55685e6ef800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 18:20:46 compute-0 ceph-osd[90655]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 24 18:20:46 compute-0 ceph-osd[90655]: bdev(0x55685e6ef800 /var/lib/ceph/osd/ceph-2/block) close
Nov 24 18:20:46 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 18:20:46 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:46 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 18:20:46 compute-0 ceph-osd[88544]: osd.0 13 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 24 18:20:46 compute-0 ceph-osd[88544]: osd.0 13 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 24 18:20:46 compute-0 ceph-osd[88544]: osd.0 13 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 24 18:20:46 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v40: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 24 18:20:46 compute-0 ceph-osd[90655]: bdev(0x55685d8b7800 /var/lib/ceph/osd/ceph-2/block) close
Nov 24 18:20:46 compute-0 bash[90636]: d4b4bd73407edf8b64315195325242832b213f100b6f7c4e8a80bbdc340ec673
Nov 24 18:20:46 compute-0 systemd[1]: Started Ceph osd.2 for e5ee928f-099b-569b-93c9-ecf025cbb50d.
Nov 24 18:20:46 compute-0 sudo[90106]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:46 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:20:46 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:46 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:20:46 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:46 compute-0 sudo[90670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:46 compute-0 sudo[90670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:46 compute-0 sudo[90670]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:46 compute-0 ceph-osd[90655]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Nov 24 18:20:46 compute-0 ceph-osd[90655]: load: jerasure load: lrc 
Nov 24 18:20:46 compute-0 ceph-osd[90655]: bdev(0x55685d926c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 24 18:20:46 compute-0 ceph-osd[90655]: bdev(0x55685d926c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 24 18:20:46 compute-0 ceph-osd[90655]: bdev(0x55685d926c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 18:20:46 compute-0 ceph-osd[90655]: bdev(0x55685d926c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 18:20:46 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 18:20:46 compute-0 ceph-osd[90655]: bdev(0x55685d926c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 24 18:20:46 compute-0 sudo[90695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:20:46 compute-0 sudo[90695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:46 compute-0 sudo[90695]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:46 compute-0 sudo[90725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:46 compute-0 sudo[90725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:46 compute-0 sudo[90725]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:47 compute-0 sudo[90750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:20:47 compute-0 sudo[90750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Nov 24 18:20:47 compute-0 ceph-mon[74927]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 18:20:47 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 24 18:20:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e14 e14: 3 total, 1 up, 3 in
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bdev(0x55685d926c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bdev(0x55685d926c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bdev(0x55685d926c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bdev(0x55685d926c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bdev(0x55685d926c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 24 18:20:47 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 1 up, 3 in
Nov 24 18:20:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 18:20:47 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:47 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 18:20:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 18:20:47 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 18:20:47 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:47 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 24 18:20:47 compute-0 ceph-mon[74927]: osdmap e13: 3 total, 1 up, 3 in
Nov 24 18:20:47 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:47 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:47 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 24 18:20:47 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:47 compute-0 ceph-mon[74927]: pgmap v40: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 24 18:20:47 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:47 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:47 compute-0 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1623794735; not ready for session (expect reconnect)
Nov 24 18:20:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 18:20:47 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:47 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 18:20:47 compute-0 podman[90816]: 2025-11-24 18:20:47.389388836 +0000 UTC m=+0.086734101 container create 565bc92e0f24fe0aa540a07cad32aef02936870adc6d74c9b1fcdf39bb3e389f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 18:20:47 compute-0 podman[90816]: 2025-11-24 18:20:47.323720078 +0000 UTC m=+0.021065353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 24 18:20:47 compute-0 ceph-osd[90655]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bdev(0x55685d926c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bdev(0x55685d926c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bdev(0x55685d926c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bdev(0x55685d926c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bdev(0x55685d927400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bdev(0x55685d927400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bdev(0x55685d927400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bdev(0x55685d927400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bluefs mount
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bluefs mount shared_bdev_used = 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: RocksDB version: 7.9.2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Git sha 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: DB SUMMARY
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: DB Session ID:  J55JOOGKCSODWZHIF7GR
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: CURRENT file:  CURRENT
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                         Options.error_if_exists: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.create_if_missing: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                                     Options.env: 0x55685e741d50
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                                Options.info_log: 0x55685d942ba0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                              Options.statistics: (nil)
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                               Options.use_fsync: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                              Options.db_log_dir: 
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                                 Options.wal_dir: db.wal
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.write_buffer_manager: 0x55685e84c460
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.unordered_write: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                               Options.row_cache: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                              Options.wal_filter: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.two_write_queues: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.wal_compression: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.atomic_flush: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.max_background_jobs: 4
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.max_background_compactions: -1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.max_subcompactions: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.max_open_files: -1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Compression algorithms supported:
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         kZSTD supported: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         kXpressCompression supported: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         kBZip2Compression supported: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         kLZ4Compression supported: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         kZlibCompression supported: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         kLZ4HCCompression supported: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         kSnappyCompression supported: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d943220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55685d92add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d943220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55685d92add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d943220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55685d92add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d943220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55685d92add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d943220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55685d92add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d943220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55685d92add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d943220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55685d92add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d943200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55685d92a430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d943200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55685d92a430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d943200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55685d92a430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 17641e21-a0a3-419d-be68-bf5701bf60bf
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008447465799, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008447466191, "job": 1, "event": "recovery_finished"}
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: freelist init
Nov 24 18:20:47 compute-0 ceph-osd[90655]: freelist _read_cfg
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bluefs umount
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bdev(0x55685d927400 /var/lib/ceph/osd/ceph-2/block) close
Nov 24 18:20:47 compute-0 systemd[1]: Started libpod-conmon-565bc92e0f24fe0aa540a07cad32aef02936870adc6d74c9b1fcdf39bb3e389f.scope.
Nov 24 18:20:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:47 compute-0 podman[90816]: 2025-11-24 18:20:47.554959098 +0000 UTC m=+0.252304373 container init 565bc92e0f24fe0aa540a07cad32aef02936870adc6d74c9b1fcdf39bb3e389f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hodgkin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:20:47 compute-0 podman[90816]: 2025-11-24 18:20:47.577007365 +0000 UTC m=+0.274352620 container start 565bc92e0f24fe0aa540a07cad32aef02936870adc6d74c9b1fcdf39bb3e389f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:20:47 compute-0 kind_hodgkin[91026]: 167 167
Nov 24 18:20:47 compute-0 systemd[1]: libpod-565bc92e0f24fe0aa540a07cad32aef02936870adc6d74c9b1fcdf39bb3e389f.scope: Deactivated successfully.
Nov 24 18:20:47 compute-0 conmon[91026]: conmon 565bc92e0f24fe0aa540 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-565bc92e0f24fe0aa540a07cad32aef02936870adc6d74c9b1fcdf39bb3e389f.scope/container/memory.events
Nov 24 18:20:47 compute-0 podman[90816]: 2025-11-24 18:20:47.608620133 +0000 UTC m=+0.305965408 container attach 565bc92e0f24fe0aa540a07cad32aef02936870adc6d74c9b1fcdf39bb3e389f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hodgkin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 18:20:47 compute-0 podman[90816]: 2025-11-24 18:20:47.610025348 +0000 UTC m=+0.307370643 container died 565bc92e0f24fe0aa540a07cad32aef02936870adc6d74c9b1fcdf39bb3e389f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bdev(0x55685d927400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bdev(0x55685d927400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bdev(0x55685d927400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bdev(0x55685d927400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bluefs mount
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bluefs mount shared_bdev_used = 4718592
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: RocksDB version: 7.9.2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Git sha 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: DB SUMMARY
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: DB Session ID:  J55JOOGKCSODWZHIF7GQ
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: CURRENT file:  CURRENT
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                         Options.error_if_exists: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.create_if_missing: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                                     Options.env: 0x55685e902b60
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                                Options.info_log: 0x55685d942900
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                              Options.statistics: (nil)
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                               Options.use_fsync: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                              Options.db_log_dir: 
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                                 Options.wal_dir: db.wal
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.write_buffer_manager: 0x55685e84ca00
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.unordered_write: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                               Options.row_cache: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                              Options.wal_filter: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.two_write_queues: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.wal_compression: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.atomic_flush: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.max_background_jobs: 4
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.max_background_compactions: -1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.max_subcompactions: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.max_open_files: -1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Compression algorithms supported:
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         kZSTD supported: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         kXpressCompression supported: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         kBZip2Compression supported: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         kLZ4Compression supported: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         kZlibCompression supported: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         kLZ4HCCompression supported: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         kSnappyCompression supported: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d942d60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55685d92add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d942d60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55685d92add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d942d60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55685d92add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d942d60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55685d92add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d942d60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55685d92add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d942d60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55685d92add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d942d60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55685d92add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d943320)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55685d92a430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d943320)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55685d92a430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d943320)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55685d92a430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 17641e21-a0a3-419d-be68-bf5701bf60bf
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008447724139, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008447759872, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008447, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "17641e21-a0a3-419d-be68-bf5701bf60bf", "db_session_id": "J55JOOGKCSODWZHIF7GQ", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:20:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-df023e9f9c3fd2b6f028f43d30a034f34d8d777c5d220a05cb0c774d3cc96883-merged.mount: Deactivated successfully.
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008447786973, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008447, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "17641e21-a0a3-419d-be68-bf5701bf60bf", "db_session_id": "J55JOOGKCSODWZHIF7GQ", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008447791195, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008447, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "17641e21-a0a3-419d-be68-bf5701bf60bf", "db_session_id": "J55JOOGKCSODWZHIF7GQ", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008447810109, "job": 1, "event": "recovery_finished"}
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 24 18:20:47 compute-0 podman[90816]: 2025-11-24 18:20:47.818034672 +0000 UTC m=+0.515379927 container remove 565bc92e0f24fe0aa540a07cad32aef02936870adc6d74c9b1fcdf39bb3e389f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Nov 24 18:20:47 compute-0 systemd[1]: libpod-conmon-565bc92e0f24fe0aa540a07cad32aef02936870adc6d74c9b1fcdf39bb3e389f.scope: Deactivated successfully.
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55685e932000
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: DB pointer 0x55685d965a00
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Nov 24 18:20:47 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:20:47 compute-0 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.027       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.027       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92a430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92a430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92a430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 18:20:47 compute-0 ceph-osd[90655]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 24 18:20:47 compute-0 ceph-osd[90655]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 24 18:20:47 compute-0 ceph-osd[90655]: _get_class not permitted to load lua
Nov 24 18:20:47 compute-0 ceph-osd[90655]: _get_class not permitted to load sdk
Nov 24 18:20:47 compute-0 ceph-osd[90655]: _get_class not permitted to load test_remote_reads
Nov 24 18:20:47 compute-0 ceph-osd[90655]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 24 18:20:47 compute-0 ceph-osd[90655]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 24 18:20:47 compute-0 ceph-osd[90655]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 24 18:20:47 compute-0 ceph-osd[90655]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 24 18:20:47 compute-0 ceph-osd[90655]: osd.2 0 load_pgs
Nov 24 18:20:47 compute-0 ceph-osd[90655]: osd.2 0 load_pgs opened 0 pgs
Nov 24 18:20:47 compute-0 ceph-osd[90655]: osd.2 0 log_to_monitors true
Nov 24 18:20:47 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2[90651]: 2025-11-24T18:20:47.927+0000 7f3fbfecf740 -1 osd.2 0 log_to_monitors true
Nov 24 18:20:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Nov 24 18:20:47 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2111577097,v1:192.168.122.100:6811/2111577097]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 24 18:20:47 compute-0 podman[91232]: 2025-11-24 18:20:47.971862077 +0000 UTC m=+0.048355262 container create e02237377b479f620bd5b427969d53b72168db5649c9e8fa419b561cc100f3d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:20:48 compute-0 systemd[1]: Started libpod-conmon-e02237377b479f620bd5b427969d53b72168db5649c9e8fa419b561cc100f3d0.scope.
Nov 24 18:20:48 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:48 compute-0 podman[91232]: 2025-11-24 18:20:47.954424587 +0000 UTC m=+0.030917802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:20:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23563c8a49868f5b3a9440d59a58e33a101701a8eed0befc65088c165f380fcf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23563c8a49868f5b3a9440d59a58e33a101701a8eed0befc65088c165f380fcf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23563c8a49868f5b3a9440d59a58e33a101701a8eed0befc65088c165f380fcf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23563c8a49868f5b3a9440d59a58e33a101701a8eed0befc65088c165f380fcf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:48 compute-0 podman[91232]: 2025-11-24 18:20:48.070035026 +0000 UTC m=+0.146528231 container init e02237377b479f620bd5b427969d53b72168db5649c9e8fa419b561cc100f3d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:20:48 compute-0 podman[91232]: 2025-11-24 18:20:48.077790752 +0000 UTC m=+0.154283937 container start e02237377b479f620bd5b427969d53b72168db5649c9e8fa419b561cc100f3d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:20:48 compute-0 podman[91232]: 2025-11-24 18:20:48.083101266 +0000 UTC m=+0.159594451 container attach e02237377b479f620bd5b427969d53b72168db5649c9e8fa419b561cc100f3d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:20:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Nov 24 18:20:48 compute-0 ceph-mon[74927]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 18:20:48 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 24 18:20:48 compute-0 ceph-mon[74927]: osdmap e14: 3 total, 1 up, 3 in
Nov 24 18:20:48 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:48 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:48 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:48 compute-0 ceph-mon[74927]: from='osd.2 [v2:192.168.122.100:6810/2111577097,v1:192.168.122.100:6811/2111577097]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 24 18:20:48 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2111577097,v1:192.168.122.100:6811/2111577097]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 24 18:20:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e15 e15: 3 total, 1 up, 3 in
Nov 24 18:20:48 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 1 up, 3 in
Nov 24 18:20:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 24 18:20:48 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2111577097,v1:192.168.122.100:6811/2111577097]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 24 18:20:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e15 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 24 18:20:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 18:20:48 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 18:20:48 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:48 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 18:20:48 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 18:20:48 compute-0 ceph-osd[89581]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 40.387 iops: 10338.975 elapsed_sec: 0.290
Nov 24 18:20:48 compute-0 ceph-osd[89581]: log_channel(cluster) log [WRN] : OSD bench result of 10338.975085 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 24 18:20:48 compute-0 ceph-osd[89581]: osd.1 0 waiting for initial osdmap
Nov 24 18:20:48 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1[89552]: 2025-11-24T18:20:48.232+0000 7f4afe579640 -1 osd.1 0 waiting for initial osdmap
Nov 24 18:20:48 compute-0 ceph-osd[89581]: osd.1 15 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 24 18:20:48 compute-0 ceph-osd[89581]: osd.1 15 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 24 18:20:48 compute-0 ceph-osd[89581]: osd.1 15 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 24 18:20:48 compute-0 ceph-osd[89581]: osd.1 15 check_osdmap_features require_osd_release unknown -> reef
Nov 24 18:20:48 compute-0 ceph-osd[89581]: osd.1 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 24 18:20:48 compute-0 ceph-osd[89581]: osd.1 15 set_numa_affinity not setting numa affinity
Nov 24 18:20:48 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1[89552]: 2025-11-24T18:20:48.262+0000 7f4af9ba1640 -1 osd.1 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 24 18:20:48 compute-0 ceph-osd[89581]: osd.1 15 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Nov 24 18:20:48 compute-0 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1623794735; not ready for session (expect reconnect)
Nov 24 18:20:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 18:20:48 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:48 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 18:20:48 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v43: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 24 18:20:48 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 24 18:20:48 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 24 18:20:49 compute-0 affectionate_rhodes[91282]: {
Nov 24 18:20:49 compute-0 affectionate_rhodes[91282]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:20:49 compute-0 affectionate_rhodes[91282]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:20:49 compute-0 affectionate_rhodes[91282]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:20:49 compute-0 affectionate_rhodes[91282]:         "osd_id": 0,
Nov 24 18:20:49 compute-0 affectionate_rhodes[91282]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:20:49 compute-0 affectionate_rhodes[91282]:         "type": "bluestore"
Nov 24 18:20:49 compute-0 affectionate_rhodes[91282]:     },
Nov 24 18:20:49 compute-0 affectionate_rhodes[91282]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:20:49 compute-0 affectionate_rhodes[91282]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:20:49 compute-0 affectionate_rhodes[91282]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:20:49 compute-0 affectionate_rhodes[91282]:         "osd_id": 1,
Nov 24 18:20:49 compute-0 affectionate_rhodes[91282]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:20:49 compute-0 affectionate_rhodes[91282]:         "type": "bluestore"
Nov 24 18:20:49 compute-0 affectionate_rhodes[91282]:     },
Nov 24 18:20:49 compute-0 affectionate_rhodes[91282]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:20:49 compute-0 affectionate_rhodes[91282]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:20:49 compute-0 affectionate_rhodes[91282]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:20:49 compute-0 affectionate_rhodes[91282]:         "osd_id": 2,
Nov 24 18:20:49 compute-0 affectionate_rhodes[91282]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:20:49 compute-0 affectionate_rhodes[91282]:         "type": "bluestore"
Nov 24 18:20:49 compute-0 affectionate_rhodes[91282]:     }
Nov 24 18:20:49 compute-0 affectionate_rhodes[91282]: }
Nov 24 18:20:49 compute-0 systemd[1]: libpod-e02237377b479f620bd5b427969d53b72168db5649c9e8fa419b561cc100f3d0.scope: Deactivated successfully.
Nov 24 18:20:49 compute-0 systemd[1]: libpod-e02237377b479f620bd5b427969d53b72168db5649c9e8fa419b561cc100f3d0.scope: Consumed 1.069s CPU time.
Nov 24 18:20:49 compute-0 podman[91316]: 2025-11-24 18:20:49.191218532 +0000 UTC m=+0.029427114 container died e02237377b479f620bd5b427969d53b72168db5649c9e8fa419b561cc100f3d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_rhodes, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 24 18:20:49 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Nov 24 18:20:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-23563c8a49868f5b3a9440d59a58e33a101701a8eed0befc65088c165f380fcf-merged.mount: Deactivated successfully.
Nov 24 18:20:49 compute-0 ceph-mon[74927]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 24 18:20:49 compute-0 ceph-mon[74927]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 24 18:20:49 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2111577097,v1:192.168.122.100:6811/2111577097]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 24 18:20:49 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Nov 24 18:20:49 compute-0 ceph-osd[90655]: osd.2 0 done with init, starting boot process
Nov 24 18:20:49 compute-0 ceph-osd[90655]: osd.2 0 start_boot
Nov 24 18:20:49 compute-0 ceph-osd[90655]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 24 18:20:49 compute-0 ceph-osd[90655]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 24 18:20:49 compute-0 ceph-osd[90655]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 24 18:20:49 compute-0 ceph-osd[90655]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 24 18:20:49 compute-0 ceph-osd[90655]: osd.2 0  bench count 12288000 bsize 4 KiB
Nov 24 18:20:49 compute-0 ceph-mon[74927]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/1623794735,v1:192.168.122.100:6807/1623794735] boot
Nov 24 18:20:49 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Nov 24 18:20:49 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 18:20:49 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:49 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 18:20:49 compute-0 ceph-osd[89581]: osd.1 16 state: booting -> active
Nov 24 18:20:49 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:49 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 16 pg[1.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 pi=[13,16)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:20:49 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 18:20:49 compute-0 podman[91316]: 2025-11-24 18:20:49.252936641 +0000 UTC m=+0.091145193 container remove e02237377b479f620bd5b427969d53b72168db5649c9e8fa419b561cc100f3d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_rhodes, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 18:20:49 compute-0 ceph-mon[74927]: from='osd.2 [v2:192.168.122.100:6810/2111577097,v1:192.168.122.100:6811/2111577097]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 24 18:20:49 compute-0 ceph-mon[74927]: osdmap e15: 3 total, 1 up, 3 in
Nov 24 18:20:49 compute-0 ceph-mon[74927]: from='osd.2 [v2:192.168.122.100:6810/2111577097,v1:192.168.122.100:6811/2111577097]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 24 18:20:49 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:49 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:49 compute-0 ceph-mon[74927]: OSD bench result of 10338.975085 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 24 18:20:49 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:49 compute-0 ceph-mon[74927]: pgmap v43: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 24 18:20:49 compute-0 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2111577097; not ready for session (expect reconnect)
Nov 24 18:20:49 compute-0 systemd[1]: libpod-conmon-e02237377b479f620bd5b427969d53b72168db5649c9e8fa419b561cc100f3d0.scope: Deactivated successfully.
Nov 24 18:20:49 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 18:20:49 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:49 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 18:20:49 compute-0 sudo[90750]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:49 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:20:49 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:49 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:20:49 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:49 compute-0 sudo[91332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:49 compute-0 sudo[91332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:49 compute-0 sudo[91332]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:49 compute-0 sudo[91357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:20:49 compute-0 sudo[91357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:49 compute-0 sudo[91357]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:49 compute-0 sudo[91382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:49 compute-0 sudo[91382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:49 compute-0 sudo[91382]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:49 compute-0 sudo[91407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:20:49 compute-0 sudo[91407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:49 compute-0 sudo[91407]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:49 compute-0 sudo[91432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:49 compute-0 sudo[91432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:49 compute-0 sudo[91432]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:49 compute-0 sudo[91457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 24 18:20:49 compute-0 sudo[91457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Nov 24 18:20:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Nov 24 18:20:50 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Nov 24 18:20:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 18:20:50 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:50 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 18:20:50 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 17 pg[1.0( empty local-lis/les=16/17 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 pi=[13,16)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:20:50 compute-0 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2111577097; not ready for session (expect reconnect)
Nov 24 18:20:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 18:20:50 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:50 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 18:20:50 compute-0 ceph-mon[74927]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 24 18:20:50 compute-0 ceph-mon[74927]: Cluster is now healthy
Nov 24 18:20:50 compute-0 ceph-mon[74927]: from='osd.2 [v2:192.168.122.100:6810/2111577097,v1:192.168.122.100:6811/2111577097]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 24 18:20:50 compute-0 ceph-mon[74927]: osd.1 [v2:192.168.122.100:6806/1623794735,v1:192.168.122.100:6807/1623794735] boot
Nov 24 18:20:50 compute-0 ceph-mon[74927]: osdmap e16: 3 total, 2 up, 3 in
Nov 24 18:20:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 18:20:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:50 compute-0 ceph-mon[74927]: osdmap e17: 3 total, 2 up, 3 in
Nov 24 18:20:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:50 compute-0 ceph-mgr[75218]: [devicehealth INFO root] creating main.db for devicehealth
Nov 24 18:20:50 compute-0 podman[91555]: 2025-11-24 18:20:50.392772388 +0000 UTC m=+0.079425247 container exec 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 18:20:50 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v46: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 24 18:20:50 compute-0 podman[91555]: 2025-11-24 18:20:50.491287006 +0000 UTC m=+0.177939845 container exec_died 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 24 18:20:50 compute-0 ceph-mgr[75218]: [devicehealth INFO root] Check health
Nov 24 18:20:50 compute-0 ceph-mgr[75218]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Nov 24 18:20:50 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 24 18:20:50 compute-0 sudo[91597]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Nov 24 18:20:50 compute-0 sudo[91597]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 24 18:20:50 compute-0 sudo[91597]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Nov 24 18:20:50 compute-0 sudo[91597]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:50 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 24 18:20:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 24 18:20:50 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 18:20:50 compute-0 sudo[91457]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:20:50 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:20:51 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:51 compute-0 sudo[91688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:51 compute-0 sudo[91688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:51 compute-0 sudo[91688]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:51 compute-0 sudo[91713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:20:51 compute-0 sudo[91713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:51 compute-0 sudo[91713]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:51 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Nov 24 18:20:51 compute-0 sudo[91738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:51 compute-0 sudo[91738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:51 compute-0 sudo[91738]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:51 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e18 e18: 3 total, 2 up, 3 in
Nov 24 18:20:51 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 2 up, 3 in
Nov 24 18:20:51 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 18:20:51 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:51 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 18:20:51 compute-0 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2111577097; not ready for session (expect reconnect)
Nov 24 18:20:51 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 18:20:51 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:51 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 18:20:51 compute-0 sudo[91763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:20:51 compute-0 sudo[91763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:51 compute-0 ceph-mon[74927]: purged_snaps scrub starts
Nov 24 18:20:51 compute-0 ceph-mon[74927]: purged_snaps scrub ok
Nov 24 18:20:51 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:51 compute-0 ceph-mon[74927]: pgmap v46: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 24 18:20:51 compute-0 ceph-mon[74927]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 24 18:20:51 compute-0 ceph-mon[74927]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 24 18:20:51 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 18:20:51 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:51 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:51 compute-0 ceph-mon[74927]: osdmap e18: 3 total, 2 up, 3 in
Nov 24 18:20:51 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:51 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:51 compute-0 sudo[91763]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:51 compute-0 sudo[91818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:51 compute-0 sudo[91818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:51 compute-0 sudo[91818]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:51 compute-0 sudo[91843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:20:51 compute-0 sudo[91843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:51 compute-0 sudo[91843]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:51 compute-0 sudo[91868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:51 compute-0 sudo[91868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:52 compute-0 sudo[91868]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:52 compute-0 sudo[91893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- inventory --format=json-pretty --filter-for-batch
Nov 24 18:20:52 compute-0 sudo[91893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:52 compute-0 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2111577097; not ready for session (expect reconnect)
Nov 24 18:20:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 18:20:52 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:52 compute-0 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 18:20:52 compute-0 ceph-osd[90655]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 33.600 iops: 8601.706 elapsed_sec: 0.349
Nov 24 18:20:52 compute-0 ceph-osd[90655]: log_channel(cluster) log [WRN] : OSD bench result of 8601.705863 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 24 18:20:52 compute-0 ceph-osd[90655]: osd.2 0 waiting for initial osdmap
Nov 24 18:20:52 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2[90651]: 2025-11-24T18:20:52.271+0000 7f3fbbe4f640 -1 osd.2 0 waiting for initial osdmap
Nov 24 18:20:52 compute-0 ceph-osd[90655]: osd.2 18 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 24 18:20:52 compute-0 ceph-osd[90655]: osd.2 18 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 24 18:20:52 compute-0 ceph-osd[90655]: osd.2 18 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 24 18:20:52 compute-0 ceph-osd[90655]: osd.2 18 check_osdmap_features require_osd_release unknown -> reef
Nov 24 18:20:52 compute-0 ceph-osd[90655]: osd.2 18 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 24 18:20:52 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2[90651]: 2025-11-24T18:20:52.301+0000 7f3fb7477640 -1 osd.2 18 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 24 18:20:52 compute-0 ceph-osd[90655]: osd.2 18 set_numa_affinity not setting numa affinity
Nov 24 18:20:52 compute-0 ceph-osd[90655]: osd.2 18 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Nov 24 18:20:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Nov 24 18:20:52 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.dfqptp(active, since 78s)
Nov 24 18:20:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Nov 24 18:20:52 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:52 compute-0 ceph-mon[74927]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/2111577097,v1:192.168.122.100:6811/2111577097] boot
Nov 24 18:20:52 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Nov 24 18:20:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 18:20:52 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:52 compute-0 ceph-osd[90655]: osd.2 19 state: booting -> active
Nov 24 18:20:52 compute-0 podman[91960]: 2025-11-24 18:20:52.43318039 +0000 UTC m=+0.048085986 container create bea56f8f48679a31074a5f47ea411add2b07d505a119bb4dfd6ff353c1c50739 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 18:20:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:20:52 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v49: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 24 18:20:52 compute-0 systemd[1]: Started libpod-conmon-bea56f8f48679a31074a5f47ea411add2b07d505a119bb4dfd6ff353c1c50739.scope.
Nov 24 18:20:52 compute-0 podman[91960]: 2025-11-24 18:20:52.413627216 +0000 UTC m=+0.028532822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:20:52 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:52 compute-0 podman[91960]: 2025-11-24 18:20:52.529755609 +0000 UTC m=+0.144661225 container init bea56f8f48679a31074a5f47ea411add2b07d505a119bb4dfd6ff353c1c50739 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_rubin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:20:52 compute-0 podman[91960]: 2025-11-24 18:20:52.539004452 +0000 UTC m=+0.153910038 container start bea56f8f48679a31074a5f47ea411add2b07d505a119bb4dfd6ff353c1c50739 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_rubin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:20:52 compute-0 podman[91960]: 2025-11-24 18:20:52.542776007 +0000 UTC m=+0.157681623 container attach bea56f8f48679a31074a5f47ea411add2b07d505a119bb4dfd6ff353c1c50739 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_rubin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:20:52 compute-0 blissful_rubin[91977]: 167 167
Nov 24 18:20:52 compute-0 systemd[1]: libpod-bea56f8f48679a31074a5f47ea411add2b07d505a119bb4dfd6ff353c1c50739.scope: Deactivated successfully.
Nov 24 18:20:52 compute-0 podman[91960]: 2025-11-24 18:20:52.54524145 +0000 UTC m=+0.160147046 container died bea56f8f48679a31074a5f47ea411add2b07d505a119bb4dfd6ff353c1c50739 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 18:20:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-424f0cfb8047e88bd237df49aad6fbdc18081e6fa9769c57703a93f45def1a86-merged.mount: Deactivated successfully.
Nov 24 18:20:52 compute-0 podman[91960]: 2025-11-24 18:20:52.577841583 +0000 UTC m=+0.192747159 container remove bea56f8f48679a31074a5f47ea411add2b07d505a119bb4dfd6ff353c1c50739 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:20:52 compute-0 systemd[1]: libpod-conmon-bea56f8f48679a31074a5f47ea411add2b07d505a119bb4dfd6ff353c1c50739.scope: Deactivated successfully.
Nov 24 18:20:52 compute-0 podman[91999]: 2025-11-24 18:20:52.774873959 +0000 UTC m=+0.078296148 container create 271188e4b2b97ecc07337523b2c0faed27d75f243567097df89c4ff3e7be49f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_williams, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 18:20:52 compute-0 systemd[1]: Started libpod-conmon-271188e4b2b97ecc07337523b2c0faed27d75f243567097df89c4ff3e7be49f4.scope.
Nov 24 18:20:52 compute-0 podman[91999]: 2025-11-24 18:20:52.748628416 +0000 UTC m=+0.052050645 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:20:52 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e4568dfb82019716305bd8de0361a8cbb222f3bed44521e7625995797c52810/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e4568dfb82019716305bd8de0361a8cbb222f3bed44521e7625995797c52810/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e4568dfb82019716305bd8de0361a8cbb222f3bed44521e7625995797c52810/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e4568dfb82019716305bd8de0361a8cbb222f3bed44521e7625995797c52810/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:52 compute-0 podman[91999]: 2025-11-24 18:20:52.86162533 +0000 UTC m=+0.165047499 container init 271188e4b2b97ecc07337523b2c0faed27d75f243567097df89c4ff3e7be49f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 18:20:52 compute-0 podman[91999]: 2025-11-24 18:20:52.867917339 +0000 UTC m=+0.171339508 container start 271188e4b2b97ecc07337523b2c0faed27d75f243567097df89c4ff3e7be49f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_williams, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 18:20:52 compute-0 podman[91999]: 2025-11-24 18:20:52.871622853 +0000 UTC m=+0.175045022 container attach 271188e4b2b97ecc07337523b2c0faed27d75f243567097df89c4ff3e7be49f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:20:53 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Nov 24 18:20:53 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Nov 24 18:20:53 compute-0 ceph-mon[74927]: OSD bench result of 8601.705863 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 24 18:20:53 compute-0 ceph-mon[74927]: mgrmap e9: compute-0.dfqptp(active, since 78s)
Nov 24 18:20:53 compute-0 ceph-mon[74927]: osd.2 [v2:192.168.122.100:6810/2111577097,v1:192.168.122.100:6811/2111577097] boot
Nov 24 18:20:53 compute-0 ceph-mon[74927]: osdmap e19: 3 total, 3 up, 3 in
Nov 24 18:20:53 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 18:20:53 compute-0 ceph-mon[74927]: pgmap v49: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 24 18:20:53 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Nov 24 18:20:54 compute-0 ceph-mon[74927]: osdmap e20: 3 total, 3 up, 3 in
Nov 24 18:20:54 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:20:54 compute-0 charming_williams[92015]: [
Nov 24 18:20:54 compute-0 charming_williams[92015]:     {
Nov 24 18:20:54 compute-0 charming_williams[92015]:         "available": false,
Nov 24 18:20:54 compute-0 charming_williams[92015]:         "ceph_device": false,
Nov 24 18:20:54 compute-0 charming_williams[92015]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 24 18:20:54 compute-0 charming_williams[92015]:         "lsm_data": {},
Nov 24 18:20:54 compute-0 charming_williams[92015]:         "lvs": [],
Nov 24 18:20:54 compute-0 charming_williams[92015]:         "path": "/dev/sr0",
Nov 24 18:20:54 compute-0 charming_williams[92015]:         "rejected_reasons": [
Nov 24 18:20:54 compute-0 charming_williams[92015]:             "Has a FileSystem",
Nov 24 18:20:54 compute-0 charming_williams[92015]:             "Insufficient space (<5GB)"
Nov 24 18:20:54 compute-0 charming_williams[92015]:         ],
Nov 24 18:20:54 compute-0 charming_williams[92015]:         "sys_api": {
Nov 24 18:20:54 compute-0 charming_williams[92015]:             "actuators": null,
Nov 24 18:20:54 compute-0 charming_williams[92015]:             "device_nodes": "sr0",
Nov 24 18:20:54 compute-0 charming_williams[92015]:             "devname": "sr0",
Nov 24 18:20:54 compute-0 charming_williams[92015]:             "human_readable_size": "482.00 KB",
Nov 24 18:20:54 compute-0 charming_williams[92015]:             "id_bus": "ata",
Nov 24 18:20:54 compute-0 charming_williams[92015]:             "model": "QEMU DVD-ROM",
Nov 24 18:20:54 compute-0 charming_williams[92015]:             "nr_requests": "2",
Nov 24 18:20:54 compute-0 charming_williams[92015]:             "parent": "/dev/sr0",
Nov 24 18:20:54 compute-0 charming_williams[92015]:             "partitions": {},
Nov 24 18:20:54 compute-0 charming_williams[92015]:             "path": "/dev/sr0",
Nov 24 18:20:54 compute-0 charming_williams[92015]:             "removable": "1",
Nov 24 18:20:54 compute-0 charming_williams[92015]:             "rev": "2.5+",
Nov 24 18:20:54 compute-0 charming_williams[92015]:             "ro": "0",
Nov 24 18:20:54 compute-0 charming_williams[92015]:             "rotational": "1",
Nov 24 18:20:54 compute-0 charming_williams[92015]:             "sas_address": "",
Nov 24 18:20:54 compute-0 charming_williams[92015]:             "sas_device_handle": "",
Nov 24 18:20:54 compute-0 charming_williams[92015]:             "scheduler_mode": "mq-deadline",
Nov 24 18:20:54 compute-0 charming_williams[92015]:             "sectors": 0,
Nov 24 18:20:54 compute-0 charming_williams[92015]:             "sectorsize": "2048",
Nov 24 18:20:54 compute-0 charming_williams[92015]:             "size": 493568.0,
Nov 24 18:20:54 compute-0 charming_williams[92015]:             "support_discard": "2048",
Nov 24 18:20:54 compute-0 charming_williams[92015]:             "type": "disk",
Nov 24 18:20:54 compute-0 charming_williams[92015]:             "vendor": "QEMU"
Nov 24 18:20:54 compute-0 charming_williams[92015]:         }
Nov 24 18:20:54 compute-0 charming_williams[92015]:     }
Nov 24 18:20:54 compute-0 charming_williams[92015]: ]
Nov 24 18:20:54 compute-0 systemd[1]: libpod-271188e4b2b97ecc07337523b2c0faed27d75f243567097df89c4ff3e7be49f4.scope: Deactivated successfully.
Nov 24 18:20:54 compute-0 systemd[1]: libpod-271188e4b2b97ecc07337523b2c0faed27d75f243567097df89c4ff3e7be49f4.scope: Consumed 1.699s CPU time.
Nov 24 18:20:54 compute-0 podman[93983]: 2025-11-24 18:20:54.555771967 +0000 UTC m=+0.034049881 container died 271188e4b2b97ecc07337523b2c0faed27d75f243567097df89c4ff3e7be49f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:20:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e4568dfb82019716305bd8de0361a8cbb222f3bed44521e7625995797c52810-merged.mount: Deactivated successfully.
Nov 24 18:20:54 compute-0 podman[93983]: 2025-11-24 18:20:54.691068965 +0000 UTC m=+0.169346799 container remove 271188e4b2b97ecc07337523b2c0faed27d75f243567097df89c4ff3e7be49f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:20:54 compute-0 systemd[1]: libpod-conmon-271188e4b2b97ecc07337523b2c0faed27d75f243567097df89c4ff3e7be49f4.scope: Deactivated successfully.
Nov 24 18:20:54 compute-0 sudo[91893]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:20:54 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:20:54 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Nov 24 18:20:54 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 24 18:20:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Nov 24 18:20:54 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 24 18:20:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Nov 24 18:20:54 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 24 18:20:54 compute-0 ceph-mgr[75218]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43690k
Nov 24 18:20:54 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43690k
Nov 24 18:20:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 24 18:20:54 compute-0 ceph-mgr[75218]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Nov 24 18:20:54 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Nov 24 18:20:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:20:54 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:20:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:20:54 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:20:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:20:54 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:54 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 745cb863-6dfb-4127-9f4c-325ddf584928 does not exist
Nov 24 18:20:54 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 12fab4d6-a8f4-45dd-ac72-fbdc0fa88dcb does not exist
Nov 24 18:20:54 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 280ab2e1-980d-4137-b556-bb21a89a5da2 does not exist
Nov 24 18:20:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:20:54 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:20:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:20:54 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:20:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:20:54 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:20:54 compute-0 sudo[93998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:54 compute-0 sudo[93998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:54 compute-0 sudo[93998]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:54 compute-0 sudo[94023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:20:54 compute-0 sudo[94023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:54 compute-0 sudo[94023]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:54 compute-0 sudo[94048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:54 compute-0 sudo[94048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:54 compute-0 sudo[94048]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:55 compute-0 sudo[94073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:20:55 compute-0 sudo[94073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:55 compute-0 podman[94138]: 2025-11-24 18:20:55.415204793 +0000 UTC m=+0.069878786 container create ca049e8ff7f98f2df01ca95bbdd308791de852534b6b69d142373ba66cda9ad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ritchie, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 18:20:55 compute-0 systemd[1]: Started libpod-conmon-ca049e8ff7f98f2df01ca95bbdd308791de852534b6b69d142373ba66cda9ad8.scope.
Nov 24 18:20:55 compute-0 podman[94138]: 2025-11-24 18:20:55.375123901 +0000 UTC m=+0.029797944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:20:55 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:55 compute-0 podman[94138]: 2025-11-24 18:20:55.4926901 +0000 UTC m=+0.147364153 container init ca049e8ff7f98f2df01ca95bbdd308791de852534b6b69d142373ba66cda9ad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ritchie, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:20:55 compute-0 podman[94138]: 2025-11-24 18:20:55.502653622 +0000 UTC m=+0.157327625 container start ca049e8ff7f98f2df01ca95bbdd308791de852534b6b69d142373ba66cda9ad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 18:20:55 compute-0 podman[94138]: 2025-11-24 18:20:55.506749165 +0000 UTC m=+0.161423168 container attach ca049e8ff7f98f2df01ca95bbdd308791de852534b6b69d142373ba66cda9ad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:20:55 compute-0 stoic_ritchie[94155]: 167 167
Nov 24 18:20:55 compute-0 systemd[1]: libpod-ca049e8ff7f98f2df01ca95bbdd308791de852534b6b69d142373ba66cda9ad8.scope: Deactivated successfully.
Nov 24 18:20:55 compute-0 podman[94138]: 2025-11-24 18:20:55.511496915 +0000 UTC m=+0.166170938 container died ca049e8ff7f98f2df01ca95bbdd308791de852534b6b69d142373ba66cda9ad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ritchie, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:20:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-5733c1a78f6abf21470e3c5fdf022e65b528b076a1c8a12ceed1854114ac3b73-merged.mount: Deactivated successfully.
Nov 24 18:20:55 compute-0 podman[94138]: 2025-11-24 18:20:55.555696231 +0000 UTC m=+0.210370254 container remove ca049e8ff7f98f2df01ca95bbdd308791de852534b6b69d142373ba66cda9ad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:20:55 compute-0 systemd[1]: libpod-conmon-ca049e8ff7f98f2df01ca95bbdd308791de852534b6b69d142373ba66cda9ad8.scope: Deactivated successfully.
Nov 24 18:20:55 compute-0 podman[94178]: 2025-11-24 18:20:55.759282673 +0000 UTC m=+0.055702188 container create f6d19fa10d4f7998821132a5a6c464fb32064df004e4001ac56efbe7c60f1b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hawking, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 18:20:55 compute-0 ceph-mon[74927]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:20:55 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:55 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:55 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 24 18:20:55 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 24 18:20:55 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 24 18:20:55 compute-0 ceph-mon[74927]: Adjusting osd_memory_target on compute-0 to 43690k
Nov 24 18:20:55 compute-0 ceph-mon[74927]: Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Nov 24 18:20:55 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:20:55 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:20:55 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:20:55 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:20:55 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:20:55 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:20:55 compute-0 podman[94178]: 2025-11-24 18:20:55.727987643 +0000 UTC m=+0.024407168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:20:55 compute-0 systemd[1]: Started libpod-conmon-f6d19fa10d4f7998821132a5a6c464fb32064df004e4001ac56efbe7c60f1b21.scope.
Nov 24 18:20:55 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e3b05c6f60e6c5bae49a88ae7940b6744733b29cb273d9db65451fdc8c27ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e3b05c6f60e6c5bae49a88ae7940b6744733b29cb273d9db65451fdc8c27ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e3b05c6f60e6c5bae49a88ae7940b6744733b29cb273d9db65451fdc8c27ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e3b05c6f60e6c5bae49a88ae7940b6744733b29cb273d9db65451fdc8c27ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e3b05c6f60e6c5bae49a88ae7940b6744733b29cb273d9db65451fdc8c27ad/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:55 compute-0 podman[94178]: 2025-11-24 18:20:55.878403992 +0000 UTC m=+0.174823577 container init f6d19fa10d4f7998821132a5a6c464fb32064df004e4001ac56efbe7c60f1b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hawking, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 18:20:55 compute-0 podman[94178]: 2025-11-24 18:20:55.889771439 +0000 UTC m=+0.186190924 container start f6d19fa10d4f7998821132a5a6c464fb32064df004e4001ac56efbe7c60f1b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hawking, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 18:20:55 compute-0 podman[94178]: 2025-11-24 18:20:55.893328079 +0000 UTC m=+0.189747604 container attach f6d19fa10d4f7998821132a5a6c464fb32064df004e4001ac56efbe7c60f1b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:20:56 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:20:56 compute-0 vibrant_hawking[94194]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:20:56 compute-0 vibrant_hawking[94194]: --> relative data size: 1.0
Nov 24 18:20:56 compute-0 vibrant_hawking[94194]: --> All data devices are unavailable
Nov 24 18:20:56 compute-0 systemd[1]: libpod-f6d19fa10d4f7998821132a5a6c464fb32064df004e4001ac56efbe7c60f1b21.scope: Deactivated successfully.
Nov 24 18:20:56 compute-0 systemd[1]: libpod-f6d19fa10d4f7998821132a5a6c464fb32064df004e4001ac56efbe7c60f1b21.scope: Consumed 1.049s CPU time.
Nov 24 18:20:56 compute-0 conmon[94194]: conmon f6d19fa10d4f79988211 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f6d19fa10d4f7998821132a5a6c464fb32064df004e4001ac56efbe7c60f1b21.scope/container/memory.events
Nov 24 18:20:56 compute-0 podman[94178]: 2025-11-24 18:20:56.986817245 +0000 UTC m=+1.283236730 container died f6d19fa10d4f7998821132a5a6c464fb32064df004e4001ac56efbe7c60f1b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hawking, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 18:20:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4e3b05c6f60e6c5bae49a88ae7940b6744733b29cb273d9db65451fdc8c27ad-merged.mount: Deactivated successfully.
Nov 24 18:20:57 compute-0 podman[94178]: 2025-11-24 18:20:57.049487887 +0000 UTC m=+1.345907372 container remove f6d19fa10d4f7998821132a5a6c464fb32064df004e4001ac56efbe7c60f1b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hawking, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 18:20:57 compute-0 systemd[1]: libpod-conmon-f6d19fa10d4f7998821132a5a6c464fb32064df004e4001ac56efbe7c60f1b21.scope: Deactivated successfully.
Nov 24 18:20:57 compute-0 sudo[94073]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:57 compute-0 sudo[94234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:57 compute-0 sudo[94234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:57 compute-0 sudo[94234]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:57 compute-0 sudo[94259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:20:57 compute-0 sudo[94259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:57 compute-0 sudo[94259]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:57 compute-0 sudo[94284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:57 compute-0 sudo[94284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:57 compute-0 sudo[94284]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:57 compute-0 sudo[94309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:20:57 compute-0 sudo[94309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:20:57 compute-0 podman[94374]: 2025-11-24 18:20:57.716084383 +0000 UTC m=+0.043597722 container create 2ffbb208c305cdb6c09feb7ce01d51322dc75fd1a4ded0d32cf765f25684b67a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hawking, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 24 18:20:57 compute-0 systemd[1]: Started libpod-conmon-2ffbb208c305cdb6c09feb7ce01d51322dc75fd1a4ded0d32cf765f25684b67a.scope.
Nov 24 18:20:57 compute-0 ceph-mon[74927]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:20:57 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:57 compute-0 podman[94374]: 2025-11-24 18:20:57.698482718 +0000 UTC m=+0.025996057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:20:57 compute-0 podman[94374]: 2025-11-24 18:20:57.794241817 +0000 UTC m=+0.121755156 container init 2ffbb208c305cdb6c09feb7ce01d51322dc75fd1a4ded0d32cf765f25684b67a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:20:57 compute-0 podman[94374]: 2025-11-24 18:20:57.800222438 +0000 UTC m=+0.127735767 container start 2ffbb208c305cdb6c09feb7ce01d51322dc75fd1a4ded0d32cf765f25684b67a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hawking, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:20:57 compute-0 great_hawking[94390]: 167 167
Nov 24 18:20:57 compute-0 systemd[1]: libpod-2ffbb208c305cdb6c09feb7ce01d51322dc75fd1a4ded0d32cf765f25684b67a.scope: Deactivated successfully.
Nov 24 18:20:57 compute-0 podman[94374]: 2025-11-24 18:20:57.808376944 +0000 UTC m=+0.135890353 container attach 2ffbb208c305cdb6c09feb7ce01d51322dc75fd1a4ded0d32cf765f25684b67a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 18:20:57 compute-0 podman[94374]: 2025-11-24 18:20:57.80940463 +0000 UTC m=+0.136917989 container died 2ffbb208c305cdb6c09feb7ce01d51322dc75fd1a4ded0d32cf765f25684b67a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:20:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc9486e0d7c77f0fa427488f88510db8536c59787d286d7944b16421b0f654c6-merged.mount: Deactivated successfully.
Nov 24 18:20:57 compute-0 podman[94374]: 2025-11-24 18:20:57.846959328 +0000 UTC m=+0.174472657 container remove 2ffbb208c305cdb6c09feb7ce01d51322dc75fd1a4ded0d32cf765f25684b67a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hawking, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 18:20:57 compute-0 systemd[1]: libpod-conmon-2ffbb208c305cdb6c09feb7ce01d51322dc75fd1a4ded0d32cf765f25684b67a.scope: Deactivated successfully.
Nov 24 18:20:58 compute-0 podman[94414]: 2025-11-24 18:20:58.02519249 +0000 UTC m=+0.046829104 container create 54cd14fe99619bb13cd56ff0874dac6b123002c3ea546f25652f580954bcf27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mendel, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:20:58 compute-0 systemd[1]: Started libpod-conmon-54cd14fe99619bb13cd56ff0874dac6b123002c3ea546f25652f580954bcf27c.scope.
Nov 24 18:20:58 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:20:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd3161d93e5ce3ec26c719ae502444e19f4f486c690bb75783a8466ad467457f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd3161d93e5ce3ec26c719ae502444e19f4f486c690bb75783a8466ad467457f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd3161d93e5ce3ec26c719ae502444e19f4f486c690bb75783a8466ad467457f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd3161d93e5ce3ec26c719ae502444e19f4f486c690bb75783a8466ad467457f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:20:58 compute-0 podman[94414]: 2025-11-24 18:20:58.007715078 +0000 UTC m=+0.029351712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:20:58 compute-0 podman[94414]: 2025-11-24 18:20:58.106101043 +0000 UTC m=+0.127737657 container init 54cd14fe99619bb13cd56ff0874dac6b123002c3ea546f25652f580954bcf27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:20:58 compute-0 podman[94414]: 2025-11-24 18:20:58.113773537 +0000 UTC m=+0.135410151 container start 54cd14fe99619bb13cd56ff0874dac6b123002c3ea546f25652f580954bcf27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:20:58 compute-0 podman[94414]: 2025-11-24 18:20:58.117183913 +0000 UTC m=+0.138820537 container attach 54cd14fe99619bb13cd56ff0874dac6b123002c3ea546f25652f580954bcf27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mendel, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 18:20:58 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:20:58 compute-0 priceless_mendel[94430]: {
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:     "0": [
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:         {
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "devices": [
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "/dev/loop3"
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             ],
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "lv_name": "ceph_lv0",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "lv_size": "21470642176",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "name": "ceph_lv0",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "tags": {
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.cluster_name": "ceph",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.crush_device_class": "",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.encrypted": "0",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.osd_id": "0",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.type": "block",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.vdo": "0"
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             },
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "type": "block",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "vg_name": "ceph_vg0"
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:         }
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:     ],
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:     "1": [
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:         {
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "devices": [
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "/dev/loop4"
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             ],
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "lv_name": "ceph_lv1",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "lv_size": "21470642176",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "name": "ceph_lv1",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "tags": {
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.cluster_name": "ceph",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.crush_device_class": "",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.encrypted": "0",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.osd_id": "1",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.type": "block",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.vdo": "0"
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             },
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "type": "block",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "vg_name": "ceph_vg1"
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:         }
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:     ],
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:     "2": [
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:         {
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "devices": [
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "/dev/loop5"
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             ],
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "lv_name": "ceph_lv2",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "lv_size": "21470642176",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "name": "ceph_lv2",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "tags": {
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.cluster_name": "ceph",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.crush_device_class": "",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.encrypted": "0",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.osd_id": "2",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.type": "block",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:                 "ceph.vdo": "0"
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             },
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "type": "block",
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:             "vg_name": "ceph_vg2"
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:         }
Nov 24 18:20:58 compute-0 priceless_mendel[94430]:     ]
Nov 24 18:20:58 compute-0 priceless_mendel[94430]: }
Nov 24 18:20:58 compute-0 systemd[1]: libpod-54cd14fe99619bb13cd56ff0874dac6b123002c3ea546f25652f580954bcf27c.scope: Deactivated successfully.
Nov 24 18:20:58 compute-0 podman[94414]: 2025-11-24 18:20:58.906740674 +0000 UTC m=+0.928377308 container died 54cd14fe99619bb13cd56ff0874dac6b123002c3ea546f25652f580954bcf27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mendel, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:20:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd3161d93e5ce3ec26c719ae502444e19f4f486c690bb75783a8466ad467457f-merged.mount: Deactivated successfully.
Nov 24 18:20:59 compute-0 podman[94414]: 2025-11-24 18:20:59.491479872 +0000 UTC m=+1.513116486 container remove 54cd14fe99619bb13cd56ff0874dac6b123002c3ea546f25652f580954bcf27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mendel, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:20:59 compute-0 systemd[1]: libpod-conmon-54cd14fe99619bb13cd56ff0874dac6b123002c3ea546f25652f580954bcf27c.scope: Deactivated successfully.
Nov 24 18:20:59 compute-0 sudo[94309]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:59 compute-0 sudo[94453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:59 compute-0 sudo[94453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:59 compute-0 sudo[94453]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:59 compute-0 sudo[94478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:20:59 compute-0 sudo[94478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:59 compute-0 sudo[94478]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:59 compute-0 sudo[94503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:20:59 compute-0 sudo[94503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:59 compute-0 sudo[94503]: pam_unix(sudo:session): session closed for user root
Nov 24 18:20:59 compute-0 sudo[94528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:20:59 compute-0 sudo[94528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:20:59 compute-0 ceph-mon[74927]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:00 compute-0 podman[94594]: 2025-11-24 18:21:00.134676666 +0000 UTC m=+0.045899541 container create b394f2aced11e9764448e12505d5f542ce07daff3a48933cf57f09172d25da1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shaw, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:21:00 compute-0 systemd[1]: Started libpod-conmon-b394f2aced11e9764448e12505d5f542ce07daff3a48933cf57f09172d25da1f.scope.
Nov 24 18:21:00 compute-0 podman[94594]: 2025-11-24 18:21:00.118486187 +0000 UTC m=+0.029709092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:21:00 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:00 compute-0 podman[94594]: 2025-11-24 18:21:00.230754182 +0000 UTC m=+0.141977067 container init b394f2aced11e9764448e12505d5f542ce07daff3a48933cf57f09172d25da1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shaw, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:21:00 compute-0 podman[94594]: 2025-11-24 18:21:00.238293032 +0000 UTC m=+0.149515907 container start b394f2aced11e9764448e12505d5f542ce07daff3a48933cf57f09172d25da1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 18:21:00 compute-0 podman[94594]: 2025-11-24 18:21:00.241501003 +0000 UTC m=+0.152723898 container attach b394f2aced11e9764448e12505d5f542ce07daff3a48933cf57f09172d25da1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 18:21:00 compute-0 charming_shaw[94610]: 167 167
Nov 24 18:21:00 compute-0 systemd[1]: libpod-b394f2aced11e9764448e12505d5f542ce07daff3a48933cf57f09172d25da1f.scope: Deactivated successfully.
Nov 24 18:21:00 compute-0 podman[94615]: 2025-11-24 18:21:00.288310766 +0000 UTC m=+0.027970958 container died b394f2aced11e9764448e12505d5f542ce07daff3a48933cf57f09172d25da1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shaw, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 18:21:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-903974524d167e48878049e1ed17f7be1d1d0f52e856d21c7b70676e9c325467-merged.mount: Deactivated successfully.
Nov 24 18:21:00 compute-0 podman[94615]: 2025-11-24 18:21:00.323931585 +0000 UTC m=+0.063591727 container remove b394f2aced11e9764448e12505d5f542ce07daff3a48933cf57f09172d25da1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shaw, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:21:00 compute-0 systemd[1]: libpod-conmon-b394f2aced11e9764448e12505d5f542ce07daff3a48933cf57f09172d25da1f.scope: Deactivated successfully.
Nov 24 18:21:00 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:00 compute-0 podman[94637]: 2025-11-24 18:21:00.54107714 +0000 UTC m=+0.062151641 container create fbc3cec5810c12314ca0dcd8454f5e3ea3927282807604aa1ddf9b0f449ef64c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swartz, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:21:00 compute-0 systemd[1]: Started libpod-conmon-fbc3cec5810c12314ca0dcd8454f5e3ea3927282807604aa1ddf9b0f449ef64c.scope.
Nov 24 18:21:00 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cfcebcc329f38eb84e03f6b71987cbd8d453821bd5d78d1084e6d1800488d67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cfcebcc329f38eb84e03f6b71987cbd8d453821bd5d78d1084e6d1800488d67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cfcebcc329f38eb84e03f6b71987cbd8d453821bd5d78d1084e6d1800488d67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cfcebcc329f38eb84e03f6b71987cbd8d453821bd5d78d1084e6d1800488d67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:00 compute-0 podman[94637]: 2025-11-24 18:21:00.520319245 +0000 UTC m=+0.041393756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:21:00 compute-0 podman[94637]: 2025-11-24 18:21:00.62503093 +0000 UTC m=+0.146105481 container init fbc3cec5810c12314ca0dcd8454f5e3ea3927282807604aa1ddf9b0f449ef64c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 18:21:00 compute-0 podman[94637]: 2025-11-24 18:21:00.632893598 +0000 UTC m=+0.153968089 container start fbc3cec5810c12314ca0dcd8454f5e3ea3927282807604aa1ddf9b0f449ef64c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 18:21:00 compute-0 podman[94637]: 2025-11-24 18:21:00.637231398 +0000 UTC m=+0.158305929 container attach fbc3cec5810c12314ca0dcd8454f5e3ea3927282807604aa1ddf9b0f449ef64c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 18:21:01 compute-0 jovial_swartz[94654]: {
Nov 24 18:21:01 compute-0 jovial_swartz[94654]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:21:01 compute-0 jovial_swartz[94654]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:21:01 compute-0 jovial_swartz[94654]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:21:01 compute-0 jovial_swartz[94654]:         "osd_id": 0,
Nov 24 18:21:01 compute-0 jovial_swartz[94654]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:21:01 compute-0 jovial_swartz[94654]:         "type": "bluestore"
Nov 24 18:21:01 compute-0 jovial_swartz[94654]:     },
Nov 24 18:21:01 compute-0 jovial_swartz[94654]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:21:01 compute-0 jovial_swartz[94654]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:21:01 compute-0 jovial_swartz[94654]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:21:01 compute-0 jovial_swartz[94654]:         "osd_id": 1,
Nov 24 18:21:01 compute-0 jovial_swartz[94654]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:21:01 compute-0 jovial_swartz[94654]:         "type": "bluestore"
Nov 24 18:21:01 compute-0 jovial_swartz[94654]:     },
Nov 24 18:21:01 compute-0 jovial_swartz[94654]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:21:01 compute-0 jovial_swartz[94654]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:21:01 compute-0 jovial_swartz[94654]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:21:01 compute-0 jovial_swartz[94654]:         "osd_id": 2,
Nov 24 18:21:01 compute-0 jovial_swartz[94654]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:21:01 compute-0 jovial_swartz[94654]:         "type": "bluestore"
Nov 24 18:21:01 compute-0 jovial_swartz[94654]:     }
Nov 24 18:21:01 compute-0 jovial_swartz[94654]: }
Nov 24 18:21:01 compute-0 systemd[1]: libpod-fbc3cec5810c12314ca0dcd8454f5e3ea3927282807604aa1ddf9b0f449ef64c.scope: Deactivated successfully.
Nov 24 18:21:01 compute-0 systemd[1]: libpod-fbc3cec5810c12314ca0dcd8454f5e3ea3927282807604aa1ddf9b0f449ef64c.scope: Consumed 1.136s CPU time.
Nov 24 18:21:01 compute-0 podman[94637]: 2025-11-24 18:21:01.759209795 +0000 UTC m=+1.280284296 container died fbc3cec5810c12314ca0dcd8454f5e3ea3927282807604aa1ddf9b0f449ef64c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:21:01 compute-0 ceph-mon[74927]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cfcebcc329f38eb84e03f6b71987cbd8d453821bd5d78d1084e6d1800488d67-merged.mount: Deactivated successfully.
Nov 24 18:21:01 compute-0 podman[94637]: 2025-11-24 18:21:01.827607212 +0000 UTC m=+1.348681713 container remove fbc3cec5810c12314ca0dcd8454f5e3ea3927282807604aa1ddf9b0f449ef64c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swartz, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 18:21:01 compute-0 systemd[1]: libpod-conmon-fbc3cec5810c12314ca0dcd8454f5e3ea3927282807604aa1ddf9b0f449ef64c.scope: Deactivated successfully.
Nov 24 18:21:01 compute-0 sudo[94528]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:01 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:21:01 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:01 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:21:01 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:01 compute-0 sudo[94698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:01 compute-0 sudo[94698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:01 compute-0 sudo[94698]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:02 compute-0 sudo[94723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:21:02 compute-0 sudo[94723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:02 compute-0 sudo[94723]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:02 compute-0 sudo[94748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:02 compute-0 sudo[94748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:02 compute-0 sudo[94748]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:02 compute-0 sudo[94773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:21:02 compute-0 sudo[94773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:02 compute-0 sudo[94773]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:02 compute-0 sudo[94798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:02 compute-0 sudo[94798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:02 compute-0 sudo[94798]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:02 compute-0 sudo[94823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 24 18:21:02 compute-0 sudo[94823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:21:02 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:02 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:02 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:02 compute-0 podman[94920]: 2025-11-24 18:21:02.919770945 +0000 UTC m=+0.087567102 container exec 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 18:21:03 compute-0 podman[94920]: 2025-11-24 18:21:03.004339571 +0000 UTC m=+0.172135638 container exec_died 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:21:03 compute-0 sudo[94823]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:21:03 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:21:03 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:21:03 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:21:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:21:03 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:21:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:21:03 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:03 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev e968221b-d0c6-464e-a60e-fa25faabb32e does not exist
Nov 24 18:21:03 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 24bfc914-75bd-4071-8406-6ef32f80062e does not exist
Nov 24 18:21:03 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev b7f294a3-2f49-4626-aaf9-c3fc39d81ade does not exist
Nov 24 18:21:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:21:03 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:21:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:21:03 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:21:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:21:03 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:21:03 compute-0 sudo[95041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:03 compute-0 sudo[95041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:03 compute-0 sudo[95041]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:03 compute-0 sudo[95066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:21:03 compute-0 sudo[95066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:03 compute-0 sudo[95066]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:03 compute-0 sudo[95091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:03 compute-0 sudo[95091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:03 compute-0 sudo[95091]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:03 compute-0 sudo[95116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:21:03 compute-0 sudo[95116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:03 compute-0 ceph-mon[74927]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:03 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:03 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:03 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:21:03 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:21:03 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:03 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:21:03 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:21:03 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:21:04 compute-0 podman[95181]: 2025-11-24 18:21:04.029550502 +0000 UTC m=+0.051821880 container create 6c7d36c9855fd30dc1ffe869388e55e81a8aa3ef8a1bd202a73fe9565700527f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bardeen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 18:21:04 compute-0 systemd[1]: Started libpod-conmon-6c7d36c9855fd30dc1ffe869388e55e81a8aa3ef8a1bd202a73fe9565700527f.scope.
Nov 24 18:21:04 compute-0 podman[95181]: 2025-11-24 18:21:04.006041358 +0000 UTC m=+0.028312806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:21:04 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:04 compute-0 podman[95181]: 2025-11-24 18:21:04.122799517 +0000 UTC m=+0.145070985 container init 6c7d36c9855fd30dc1ffe869388e55e81a8aa3ef8a1bd202a73fe9565700527f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 24 18:21:04 compute-0 podman[95181]: 2025-11-24 18:21:04.131023965 +0000 UTC m=+0.153295333 container start 6c7d36c9855fd30dc1ffe869388e55e81a8aa3ef8a1bd202a73fe9565700527f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bardeen, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 18:21:04 compute-0 podman[95181]: 2025-11-24 18:21:04.134054081 +0000 UTC m=+0.156325489 container attach 6c7d36c9855fd30dc1ffe869388e55e81a8aa3ef8a1bd202a73fe9565700527f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:21:04 compute-0 tender_bardeen[95198]: 167 167
Nov 24 18:21:04 compute-0 systemd[1]: libpod-6c7d36c9855fd30dc1ffe869388e55e81a8aa3ef8a1bd202a73fe9565700527f.scope: Deactivated successfully.
Nov 24 18:21:04 compute-0 podman[95181]: 2025-11-24 18:21:04.13676537 +0000 UTC m=+0.159036738 container died 6c7d36c9855fd30dc1ffe869388e55e81a8aa3ef8a1bd202a73fe9565700527f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bardeen, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:21:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-f968aadb21685555ed9371a1030227bec24a3b259fc9b1722361f339d74bb873-merged.mount: Deactivated successfully.
Nov 24 18:21:04 compute-0 podman[95181]: 2025-11-24 18:21:04.180163666 +0000 UTC m=+0.202435044 container remove 6c7d36c9855fd30dc1ffe869388e55e81a8aa3ef8a1bd202a73fe9565700527f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bardeen, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 18:21:04 compute-0 systemd[1]: libpod-conmon-6c7d36c9855fd30dc1ffe869388e55e81a8aa3ef8a1bd202a73fe9565700527f.scope: Deactivated successfully.
Nov 24 18:21:04 compute-0 podman[95222]: 2025-11-24 18:21:04.346424945 +0000 UTC m=+0.054853526 container create b2f510f59292c25bfcd886ba84b01e155d1cb27b400cc9d1561205fae83b6f9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 18:21:04 compute-0 systemd[1]: Started libpod-conmon-b2f510f59292c25bfcd886ba84b01e155d1cb27b400cc9d1561205fae83b6f9d.scope.
Nov 24 18:21:04 compute-0 podman[95222]: 2025-11-24 18:21:04.326674586 +0000 UTC m=+0.035103417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:21:04 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/010bf65b13ba035b054e7c05ee3ddeb6f840e5222014277e250b308c7dfa4fd5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/010bf65b13ba035b054e7c05ee3ddeb6f840e5222014277e250b308c7dfa4fd5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/010bf65b13ba035b054e7c05ee3ddeb6f840e5222014277e250b308c7dfa4fd5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/010bf65b13ba035b054e7c05ee3ddeb6f840e5222014277e250b308c7dfa4fd5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/010bf65b13ba035b054e7c05ee3ddeb6f840e5222014277e250b308c7dfa4fd5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:04 compute-0 podman[95222]: 2025-11-24 18:21:04.462800074 +0000 UTC m=+0.171228725 container init b2f510f59292c25bfcd886ba84b01e155d1cb27b400cc9d1561205fae83b6f9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cray, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:21:04 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:04 compute-0 podman[95222]: 2025-11-24 18:21:04.47334051 +0000 UTC m=+0.181769061 container start b2f510f59292c25bfcd886ba84b01e155d1cb27b400cc9d1561205fae83b6f9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cray, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:21:04 compute-0 podman[95222]: 2025-11-24 18:21:04.476748846 +0000 UTC m=+0.185177407 container attach b2f510f59292c25bfcd886ba84b01e155d1cb27b400cc9d1561205fae83b6f9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cray, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 18:21:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:21:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:21:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:21:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:21:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:21:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:21:05 compute-0 nostalgic_cray[95239]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:21:05 compute-0 nostalgic_cray[95239]: --> relative data size: 1.0
Nov 24 18:21:05 compute-0 nostalgic_cray[95239]: --> All data devices are unavailable
Nov 24 18:21:05 compute-0 systemd[1]: libpod-b2f510f59292c25bfcd886ba84b01e155d1cb27b400cc9d1561205fae83b6f9d.scope: Deactivated successfully.
Nov 24 18:21:05 compute-0 podman[95222]: 2025-11-24 18:21:05.41252428 +0000 UTC m=+1.120952841 container died b2f510f59292c25bfcd886ba84b01e155d1cb27b400cc9d1561205fae83b6f9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cray, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 18:21:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-010bf65b13ba035b054e7c05ee3ddeb6f840e5222014277e250b308c7dfa4fd5-merged.mount: Deactivated successfully.
Nov 24 18:21:05 compute-0 podman[95222]: 2025-11-24 18:21:05.463089487 +0000 UTC m=+1.171518048 container remove b2f510f59292c25bfcd886ba84b01e155d1cb27b400cc9d1561205fae83b6f9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cray, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 18:21:05 compute-0 systemd[1]: libpod-conmon-b2f510f59292c25bfcd886ba84b01e155d1cb27b400cc9d1561205fae83b6f9d.scope: Deactivated successfully.
Nov 24 18:21:05 compute-0 sudo[95116]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:05 compute-0 sudo[95279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:05 compute-0 sudo[95279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:05 compute-0 sudo[95279]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:05 compute-0 sudo[95304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:21:05 compute-0 sudo[95304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:05 compute-0 sudo[95304]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:05 compute-0 sudo[95329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:05 compute-0 sudo[95329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:05 compute-0 sudo[95329]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:05 compute-0 sudo[95354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:21:05 compute-0 sudo[95354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:05 compute-0 ceph-mon[74927]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:06 compute-0 podman[95418]: 2025-11-24 18:21:06.024826054 +0000 UTC m=+0.035359724 container create 95afbd2312215a22b626ea8a1b89c9b12a81fd9fba4b01697a8317ef51149d77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 18:21:06 compute-0 systemd[1]: Started libpod-conmon-95afbd2312215a22b626ea8a1b89c9b12a81fd9fba4b01697a8317ef51149d77.scope.
Nov 24 18:21:06 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:06 compute-0 podman[95418]: 2025-11-24 18:21:06.097306235 +0000 UTC m=+0.107839915 container init 95afbd2312215a22b626ea8a1b89c9b12a81fd9fba4b01697a8317ef51149d77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 24 18:21:06 compute-0 podman[95418]: 2025-11-24 18:21:06.104082076 +0000 UTC m=+0.114615736 container start 95afbd2312215a22b626ea8a1b89c9b12a81fd9fba4b01697a8317ef51149d77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mclaren, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 18:21:06 compute-0 podman[95418]: 2025-11-24 18:21:06.009982729 +0000 UTC m=+0.020516389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:21:06 compute-0 podman[95418]: 2025-11-24 18:21:06.107913683 +0000 UTC m=+0.118447363 container attach 95afbd2312215a22b626ea8a1b89c9b12a81fd9fba4b01697a8317ef51149d77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:21:06 compute-0 hopeful_mclaren[95434]: 167 167
Nov 24 18:21:06 compute-0 systemd[1]: libpod-95afbd2312215a22b626ea8a1b89c9b12a81fd9fba4b01697a8317ef51149d77.scope: Deactivated successfully.
Nov 24 18:21:06 compute-0 podman[95418]: 2025-11-24 18:21:06.110471237 +0000 UTC m=+0.121004897 container died 95afbd2312215a22b626ea8a1b89c9b12a81fd9fba4b01697a8317ef51149d77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mclaren, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 18:21:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-16c4d55702ed76b2725fe157231d9dac0b6a2f1a170bda61a5552a35e8919c33-merged.mount: Deactivated successfully.
Nov 24 18:21:06 compute-0 podman[95418]: 2025-11-24 18:21:06.158010458 +0000 UTC m=+0.168544128 container remove 95afbd2312215a22b626ea8a1b89c9b12a81fd9fba4b01697a8317ef51149d77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mclaren, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 18:21:06 compute-0 systemd[1]: libpod-conmon-95afbd2312215a22b626ea8a1b89c9b12a81fd9fba4b01697a8317ef51149d77.scope: Deactivated successfully.
Nov 24 18:21:06 compute-0 podman[95457]: 2025-11-24 18:21:06.358573793 +0000 UTC m=+0.067954887 container create b55b6eefe8624fa420cd44f75cafc70535e6f7688df67472a5989558de552ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hellman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 24 18:21:06 compute-0 systemd[1]: Started libpod-conmon-b55b6eefe8624fa420cd44f75cafc70535e6f7688df67472a5989558de552ff2.scope.
Nov 24 18:21:06 compute-0 podman[95457]: 2025-11-24 18:21:06.332646848 +0000 UTC m=+0.042028032 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:21:06 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e0ef08dbded87291a0eda8cda47789cd5c51eb265042e97f8dfbbcc1f28c18c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e0ef08dbded87291a0eda8cda47789cd5c51eb265042e97f8dfbbcc1f28c18c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e0ef08dbded87291a0eda8cda47789cd5c51eb265042e97f8dfbbcc1f28c18c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e0ef08dbded87291a0eda8cda47789cd5c51eb265042e97f8dfbbcc1f28c18c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:06 compute-0 podman[95457]: 2025-11-24 18:21:06.446871733 +0000 UTC m=+0.156252867 container init b55b6eefe8624fa420cd44f75cafc70535e6f7688df67472a5989558de552ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hellman, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:21:06 compute-0 podman[95457]: 2025-11-24 18:21:06.458049836 +0000 UTC m=+0.167430980 container start b55b6eefe8624fa420cd44f75cafc70535e6f7688df67472a5989558de552ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 18:21:06 compute-0 podman[95457]: 2025-11-24 18:21:06.461734899 +0000 UTC m=+0.171116043 container attach b55b6eefe8624fa420cd44f75cafc70535e6f7688df67472a5989558de552ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hellman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 18:21:06 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]: {
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:     "0": [
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:         {
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "devices": [
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "/dev/loop3"
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             ],
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "lv_name": "ceph_lv0",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "lv_size": "21470642176",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "name": "ceph_lv0",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "tags": {
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.cluster_name": "ceph",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.crush_device_class": "",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.encrypted": "0",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.osd_id": "0",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.type": "block",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.vdo": "0"
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             },
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "type": "block",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "vg_name": "ceph_vg0"
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:         }
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:     ],
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:     "1": [
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:         {
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "devices": [
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "/dev/loop4"
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             ],
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "lv_name": "ceph_lv1",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "lv_size": "21470642176",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "name": "ceph_lv1",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "tags": {
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.cluster_name": "ceph",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.crush_device_class": "",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.encrypted": "0",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.osd_id": "1",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.type": "block",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.vdo": "0"
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             },
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "type": "block",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "vg_name": "ceph_vg1"
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:         }
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:     ],
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:     "2": [
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:         {
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "devices": [
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "/dev/loop5"
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             ],
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "lv_name": "ceph_lv2",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "lv_size": "21470642176",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "name": "ceph_lv2",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "tags": {
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.cluster_name": "ceph",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.crush_device_class": "",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.encrypted": "0",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.osd_id": "2",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.type": "block",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:                 "ceph.vdo": "0"
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             },
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "type": "block",
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:             "vg_name": "ceph_vg2"
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:         }
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]:     ]
Nov 24 18:21:07 compute-0 inspiring_hellman[95473]: }
Nov 24 18:21:07 compute-0 systemd[1]: libpod-b55b6eefe8624fa420cd44f75cafc70535e6f7688df67472a5989558de552ff2.scope: Deactivated successfully.
Nov 24 18:21:07 compute-0 podman[95457]: 2025-11-24 18:21:07.193041787 +0000 UTC m=+0.902422931 container died b55b6eefe8624fa420cd44f75cafc70535e6f7688df67472a5989558de552ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:21:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e0ef08dbded87291a0eda8cda47789cd5c51eb265042e97f8dfbbcc1f28c18c-merged.mount: Deactivated successfully.
Nov 24 18:21:07 compute-0 podman[95457]: 2025-11-24 18:21:07.262306527 +0000 UTC m=+0.971687631 container remove b55b6eefe8624fa420cd44f75cafc70535e6f7688df67472a5989558de552ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hellman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:21:07 compute-0 systemd[1]: libpod-conmon-b55b6eefe8624fa420cd44f75cafc70535e6f7688df67472a5989558de552ff2.scope: Deactivated successfully.
Nov 24 18:21:07 compute-0 sudo[95354]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:07 compute-0 sudo[95494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:07 compute-0 sudo[95494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:07 compute-0 sudo[95494]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:07 compute-0 sudo[95519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:21:07 compute-0 sudo[95519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:07 compute-0 sudo[95519]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:21:07 compute-0 sudo[95544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:07 compute-0 sudo[95544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:07 compute-0 sudo[95544]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:07 compute-0 sudo[95569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:21:07 compute-0 sudo[95569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:07 compute-0 podman[95635]: 2025-11-24 18:21:07.935173481 +0000 UTC m=+0.043114250 container create 58d3f36c93b15aeed9502048d7e18af086de7c041c4f8809a45ee55b23330370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 18:21:07 compute-0 systemd[1]: Started libpod-conmon-58d3f36c93b15aeed9502048d7e18af086de7c041c4f8809a45ee55b23330370.scope.
Nov 24 18:21:07 compute-0 ceph-mon[74927]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:08 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:08 compute-0 podman[95635]: 2025-11-24 18:21:07.918029928 +0000 UTC m=+0.025970727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:21:08 compute-0 podman[95635]: 2025-11-24 18:21:08.020185108 +0000 UTC m=+0.128125887 container init 58d3f36c93b15aeed9502048d7e18af086de7c041c4f8809a45ee55b23330370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 18:21:08 compute-0 podman[95635]: 2025-11-24 18:21:08.031158045 +0000 UTC m=+0.139098814 container start 58d3f36c93b15aeed9502048d7e18af086de7c041c4f8809a45ee55b23330370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_easley, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 18:21:08 compute-0 flamboyant_easley[95652]: 167 167
Nov 24 18:21:08 compute-0 systemd[1]: libpod-58d3f36c93b15aeed9502048d7e18af086de7c041c4f8809a45ee55b23330370.scope: Deactivated successfully.
Nov 24 18:21:08 compute-0 conmon[95652]: conmon 58d3f36c93b15aeed950 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-58d3f36c93b15aeed9502048d7e18af086de7c041c4f8809a45ee55b23330370.scope/container/memory.events
Nov 24 18:21:08 compute-0 podman[95635]: 2025-11-24 18:21:08.03610303 +0000 UTC m=+0.144043839 container attach 58d3f36c93b15aeed9502048d7e18af086de7c041c4f8809a45ee55b23330370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_easley, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:21:08 compute-0 podman[95635]: 2025-11-24 18:21:08.036645093 +0000 UTC m=+0.144585862 container died 58d3f36c93b15aeed9502048d7e18af086de7c041c4f8809a45ee55b23330370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 18:21:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c12203e357fc6c0ed7a7d6f2c7407f0e5f195b61493b9ce3108638177040464-merged.mount: Deactivated successfully.
Nov 24 18:21:08 compute-0 podman[95635]: 2025-11-24 18:21:08.067594555 +0000 UTC m=+0.175535324 container remove 58d3f36c93b15aeed9502048d7e18af086de7c041c4f8809a45ee55b23330370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:21:08 compute-0 systemd[1]: libpod-conmon-58d3f36c93b15aeed9502048d7e18af086de7c041c4f8809a45ee55b23330370.scope: Deactivated successfully.
Nov 24 18:21:08 compute-0 podman[95675]: 2025-11-24 18:21:08.265515374 +0000 UTC m=+0.048051585 container create 9db0e3c50c57a648af76b69fcbfee206a1fda059688499fa63f6233148f5c32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bassi, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:21:08 compute-0 systemd[1]: Started libpod-conmon-9db0e3c50c57a648af76b69fcbfee206a1fda059688499fa63f6233148f5c32d.scope.
Nov 24 18:21:08 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:08 compute-0 podman[95675]: 2025-11-24 18:21:08.245478678 +0000 UTC m=+0.028014889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c69a0a748856eda226f3f2d162bbf77cbe11588456e04c5665f108851eaec41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c69a0a748856eda226f3f2d162bbf77cbe11588456e04c5665f108851eaec41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c69a0a748856eda226f3f2d162bbf77cbe11588456e04c5665f108851eaec41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c69a0a748856eda226f3f2d162bbf77cbe11588456e04c5665f108851eaec41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:08 compute-0 podman[95675]: 2025-11-24 18:21:08.363645742 +0000 UTC m=+0.146181933 container init 9db0e3c50c57a648af76b69fcbfee206a1fda059688499fa63f6233148f5c32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 18:21:08 compute-0 podman[95675]: 2025-11-24 18:21:08.369669404 +0000 UTC m=+0.152205565 container start 9db0e3c50c57a648af76b69fcbfee206a1fda059688499fa63f6233148f5c32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:21:08 compute-0 podman[95675]: 2025-11-24 18:21:08.373140132 +0000 UTC m=+0.155676303 container attach 9db0e3c50c57a648af76b69fcbfee206a1fda059688499fa63f6233148f5c32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bassi, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 18:21:08 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:09 compute-0 goofy_bassi[95692]: {
Nov 24 18:21:09 compute-0 goofy_bassi[95692]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:21:09 compute-0 goofy_bassi[95692]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:21:09 compute-0 goofy_bassi[95692]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:21:09 compute-0 goofy_bassi[95692]:         "osd_id": 0,
Nov 24 18:21:09 compute-0 goofy_bassi[95692]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:21:09 compute-0 goofy_bassi[95692]:         "type": "bluestore"
Nov 24 18:21:09 compute-0 goofy_bassi[95692]:     },
Nov 24 18:21:09 compute-0 goofy_bassi[95692]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:21:09 compute-0 goofy_bassi[95692]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:21:09 compute-0 goofy_bassi[95692]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:21:09 compute-0 goofy_bassi[95692]:         "osd_id": 1,
Nov 24 18:21:09 compute-0 goofy_bassi[95692]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:21:09 compute-0 goofy_bassi[95692]:         "type": "bluestore"
Nov 24 18:21:09 compute-0 goofy_bassi[95692]:     },
Nov 24 18:21:09 compute-0 goofy_bassi[95692]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:21:09 compute-0 goofy_bassi[95692]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:21:09 compute-0 goofy_bassi[95692]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:21:09 compute-0 goofy_bassi[95692]:         "osd_id": 2,
Nov 24 18:21:09 compute-0 goofy_bassi[95692]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:21:09 compute-0 goofy_bassi[95692]:         "type": "bluestore"
Nov 24 18:21:09 compute-0 goofy_bassi[95692]:     }
Nov 24 18:21:09 compute-0 goofy_bassi[95692]: }
Nov 24 18:21:09 compute-0 systemd[1]: libpod-9db0e3c50c57a648af76b69fcbfee206a1fda059688499fa63f6233148f5c32d.scope: Deactivated successfully.
Nov 24 18:21:09 compute-0 systemd[1]: libpod-9db0e3c50c57a648af76b69fcbfee206a1fda059688499fa63f6233148f5c32d.scope: Consumed 1.086s CPU time.
Nov 24 18:21:09 compute-0 podman[95675]: 2025-11-24 18:21:09.450483991 +0000 UTC m=+1.233020202 container died 9db0e3c50c57a648af76b69fcbfee206a1fda059688499fa63f6233148f5c32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bassi, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:21:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c69a0a748856eda226f3f2d162bbf77cbe11588456e04c5665f108851eaec41-merged.mount: Deactivated successfully.
Nov 24 18:21:09 compute-0 podman[95675]: 2025-11-24 18:21:09.500555986 +0000 UTC m=+1.283092147 container remove 9db0e3c50c57a648af76b69fcbfee206a1fda059688499fa63f6233148f5c32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bassi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 18:21:09 compute-0 systemd[1]: libpod-conmon-9db0e3c50c57a648af76b69fcbfee206a1fda059688499fa63f6233148f5c32d.scope: Deactivated successfully.
Nov 24 18:21:09 compute-0 sudo[95569]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:21:09 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:21:09 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:09 compute-0 sudo[95736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:09 compute-0 sudo[95736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:09 compute-0 sudo[95736]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:09 compute-0 sudo[95761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:21:09 compute-0 sudo[95761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:09 compute-0 sudo[95761]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:09 compute-0 ceph-mon[74927]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:09 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:09 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:10 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:10 compute-0 sudo[95809]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvrfoymznlvmvrlodjjdzyxkjfwxytaa ; /usr/bin/python3'
Nov 24 18:21:10 compute-0 sudo[95809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:10 compute-0 python3[95811]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:21:10 compute-0 podman[95813]: 2025-11-24 18:21:10.756287579 +0000 UTC m=+0.037680733 container create 29d351739942155f582fcfc897c0f027069c083447060bc9c41d6ceff5c7ec80 (image=quay.io/ceph/ceph:v18, name=thirsty_nightingale, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:21:10 compute-0 systemd[1]: Started libpod-conmon-29d351739942155f582fcfc897c0f027069c083447060bc9c41d6ceff5c7ec80.scope.
Nov 24 18:21:10 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eba66205735afddb23c8f355b0db5e251770ded4b29a3dfe87d1754783343b5f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eba66205735afddb23c8f355b0db5e251770ded4b29a3dfe87d1754783343b5f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eba66205735afddb23c8f355b0db5e251770ded4b29a3dfe87d1754783343b5f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:10 compute-0 podman[95813]: 2025-11-24 18:21:10.832204567 +0000 UTC m=+0.113597731 container init 29d351739942155f582fcfc897c0f027069c083447060bc9c41d6ceff5c7ec80 (image=quay.io/ceph/ceph:v18, name=thirsty_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:21:10 compute-0 podman[95813]: 2025-11-24 18:21:10.739817733 +0000 UTC m=+0.021210917 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:21:10 compute-0 podman[95813]: 2025-11-24 18:21:10.839120651 +0000 UTC m=+0.120513805 container start 29d351739942155f582fcfc897c0f027069c083447060bc9c41d6ceff5c7ec80 (image=quay.io/ceph/ceph:v18, name=thirsty_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:21:10 compute-0 podman[95813]: 2025-11-24 18:21:10.841710317 +0000 UTC m=+0.123103531 container attach 29d351739942155f582fcfc897c0f027069c083447060bc9c41d6ceff5c7ec80 (image=quay.io/ceph/ceph:v18, name=thirsty_nightingale, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:21:11 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 24 18:21:11 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2469430221' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 18:21:11 compute-0 thirsty_nightingale[95829]: 
Nov 24 18:21:11 compute-0 thirsty_nightingale[95829]: {"fsid":"e5ee928f-099b-569b-93c9-ecf025cbb50d","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":144,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":20,"num_osds":3,"num_up_osds":3,"osd_up_since":1764008452,"num_in_osds":3,"osd_in_since":1764008421,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":83410944,"bytes_avail":64328515584,"bytes_total":64411926528},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-24T18:20:36.466398+0000","services":{}},"progress_events":{}}
Nov 24 18:21:11 compute-0 systemd[1]: libpod-29d351739942155f582fcfc897c0f027069c083447060bc9c41d6ceff5c7ec80.scope: Deactivated successfully.
Nov 24 18:21:11 compute-0 podman[95813]: 2025-11-24 18:21:11.46783535 +0000 UTC m=+0.749228554 container died 29d351739942155f582fcfc897c0f027069c083447060bc9c41d6ceff5c7ec80 (image=quay.io/ceph/ceph:v18, name=thirsty_nightingale, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 18:21:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-eba66205735afddb23c8f355b0db5e251770ded4b29a3dfe87d1754783343b5f-merged.mount: Deactivated successfully.
Nov 24 18:21:11 compute-0 podman[95813]: 2025-11-24 18:21:11.514924059 +0000 UTC m=+0.796317203 container remove 29d351739942155f582fcfc897c0f027069c083447060bc9c41d6ceff5c7ec80 (image=quay.io/ceph/ceph:v18, name=thirsty_nightingale, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 18:21:11 compute-0 systemd[1]: libpod-conmon-29d351739942155f582fcfc897c0f027069c083447060bc9c41d6ceff5c7ec80.scope: Deactivated successfully.
Nov 24 18:21:11 compute-0 sudo[95809]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:11 compute-0 sudo[95891]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqvzctkcbtupnaejlrxwmcaljzeyunny ; /usr/bin/python3'
Nov 24 18:21:11 compute-0 sudo[95891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:11 compute-0 ceph-mon[74927]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:11 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2469430221' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 18:21:11 compute-0 python3[95893]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:21:12 compute-0 podman[95894]: 2025-11-24 18:21:12.060450767 +0000 UTC m=+0.044169097 container create eeba2335ba5a93b344444be81733a7ffa4e0f1b3d5a451a27aedd5f8d2d7d476 (image=quay.io/ceph/ceph:v18, name=amazing_brown, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:21:12 compute-0 systemd[1]: Started libpod-conmon-eeba2335ba5a93b344444be81733a7ffa4e0f1b3d5a451a27aedd5f8d2d7d476.scope.
Nov 24 18:21:12 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deb3cd6c062618adcafbc1f242b7c58e38107d94846553368b8fa24b2a904e96/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deb3cd6c062618adcafbc1f242b7c58e38107d94846553368b8fa24b2a904e96/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:12 compute-0 podman[95894]: 2025-11-24 18:21:12.123346875 +0000 UTC m=+0.107065195 container init eeba2335ba5a93b344444be81733a7ffa4e0f1b3d5a451a27aedd5f8d2d7d476 (image=quay.io/ceph/ceph:v18, name=amazing_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 18:21:12 compute-0 podman[95894]: 2025-11-24 18:21:12.130933077 +0000 UTC m=+0.114651397 container start eeba2335ba5a93b344444be81733a7ffa4e0f1b3d5a451a27aedd5f8d2d7d476 (image=quay.io/ceph/ceph:v18, name=amazing_brown, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 18:21:12 compute-0 podman[95894]: 2025-11-24 18:21:12.133977964 +0000 UTC m=+0.117696284 container attach eeba2335ba5a93b344444be81733a7ffa4e0f1b3d5a451a27aedd5f8d2d7d476 (image=quay.io/ceph/ceph:v18, name=amazing_brown, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:21:12 compute-0 podman[95894]: 2025-11-24 18:21:12.040574755 +0000 UTC m=+0.024293145 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:21:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:21:12 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 24 18:21:12 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1996595322' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 18:21:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Nov 24 18:21:12 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1996595322' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 18:21:13 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1996595322' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 18:21:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Nov 24 18:21:13 compute-0 amazing_brown[95909]: pool 'vms' created
Nov 24 18:21:13 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Nov 24 18:21:13 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 21 pg[2.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [2] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:13 compute-0 systemd[1]: libpod-eeba2335ba5a93b344444be81733a7ffa4e0f1b3d5a451a27aedd5f8d2d7d476.scope: Deactivated successfully.
Nov 24 18:21:13 compute-0 podman[95894]: 2025-11-24 18:21:13.030453265 +0000 UTC m=+1.014171605 container died eeba2335ba5a93b344444be81733a7ffa4e0f1b3d5a451a27aedd5f8d2d7d476 (image=quay.io/ceph/ceph:v18, name=amazing_brown, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:21:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-deb3cd6c062618adcafbc1f242b7c58e38107d94846553368b8fa24b2a904e96-merged.mount: Deactivated successfully.
Nov 24 18:21:13 compute-0 podman[95894]: 2025-11-24 18:21:13.07497296 +0000 UTC m=+1.058691280 container remove eeba2335ba5a93b344444be81733a7ffa4e0f1b3d5a451a27aedd5f8d2d7d476 (image=quay.io/ceph/ceph:v18, name=amazing_brown, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 24 18:21:13 compute-0 sudo[95891]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:13 compute-0 systemd[1]: libpod-conmon-eeba2335ba5a93b344444be81733a7ffa4e0f1b3d5a451a27aedd5f8d2d7d476.scope: Deactivated successfully.
Nov 24 18:21:13 compute-0 sudo[95971]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyyirrjsbmaebhgohesyyqutruzdxgmq ; /usr/bin/python3'
Nov 24 18:21:13 compute-0 sudo[95971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:13 compute-0 python3[95973]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:21:13 compute-0 podman[95974]: 2025-11-24 18:21:13.430349495 +0000 UTC m=+0.036085582 container create 897d79ec286f196995abba40f40692c83d6011803e01b10edb67b2935455c5bc (image=quay.io/ceph/ceph:v18, name=jolly_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:21:13 compute-0 systemd[1]: Started libpod-conmon-897d79ec286f196995abba40f40692c83d6011803e01b10edb67b2935455c5bc.scope.
Nov 24 18:21:13 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1dd6dfd42e1ac56a670eb6b1c45a42aa28429953edd5f9faa82590023960c37/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1dd6dfd42e1ac56a670eb6b1c45a42aa28429953edd5f9faa82590023960c37/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:13 compute-0 podman[95974]: 2025-11-24 18:21:13.496751982 +0000 UTC m=+0.102488069 container init 897d79ec286f196995abba40f40692c83d6011803e01b10edb67b2935455c5bc (image=quay.io/ceph/ceph:v18, name=jolly_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Nov 24 18:21:13 compute-0 podman[95974]: 2025-11-24 18:21:13.50183202 +0000 UTC m=+0.107568117 container start 897d79ec286f196995abba40f40692c83d6011803e01b10edb67b2935455c5bc (image=quay.io/ceph/ceph:v18, name=jolly_meninsky, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:21:13 compute-0 podman[95974]: 2025-11-24 18:21:13.505085713 +0000 UTC m=+0.110821820 container attach 897d79ec286f196995abba40f40692c83d6011803e01b10edb67b2935455c5bc (image=quay.io/ceph/ceph:v18, name=jolly_meninsky, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 18:21:13 compute-0 podman[95974]: 2025-11-24 18:21:13.414724981 +0000 UTC m=+0.020461098 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:21:14 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Nov 24 18:21:14 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Nov 24 18:21:14 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Nov 24 18:21:14 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 24 18:21:14 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3178470247' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 18:21:14 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 22 pg[2.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [2] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:14 compute-0 ceph-mon[74927]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:14 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1996595322' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 18:21:14 compute-0 ceph-mon[74927]: osdmap e21: 3 total, 3 up, 3 in
Nov 24 18:21:14 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v63: 2 pgs: 1 active+clean, 1 creating+peering; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:15 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Nov 24 18:21:15 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3178470247' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 18:21:15 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Nov 24 18:21:15 compute-0 jolly_meninsky[95990]: pool 'volumes' created
Nov 24 18:21:15 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Nov 24 18:21:15 compute-0 ceph-mon[74927]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 18:21:15 compute-0 ceph-mon[74927]: osdmap e22: 3 total, 3 up, 3 in
Nov 24 18:21:15 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3178470247' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 18:21:15 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3178470247' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 18:21:15 compute-0 ceph-mon[74927]: osdmap e23: 3 total, 3 up, 3 in
Nov 24 18:21:15 compute-0 systemd[1]: libpod-897d79ec286f196995abba40f40692c83d6011803e01b10edb67b2935455c5bc.scope: Deactivated successfully.
Nov 24 18:21:15 compute-0 conmon[95990]: conmon 897d79ec286f196995ab <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-897d79ec286f196995abba40f40692c83d6011803e01b10edb67b2935455c5bc.scope/container/memory.events
Nov 24 18:21:15 compute-0 podman[95974]: 2025-11-24 18:21:15.035008451 +0000 UTC m=+1.640744598 container died 897d79ec286f196995abba40f40692c83d6011803e01b10edb67b2935455c5bc (image=quay.io/ceph/ceph:v18, name=jolly_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 18:21:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1dd6dfd42e1ac56a670eb6b1c45a42aa28429953edd5f9faa82590023960c37-merged.mount: Deactivated successfully.
Nov 24 18:21:15 compute-0 podman[95974]: 2025-11-24 18:21:15.078718105 +0000 UTC m=+1.684454192 container remove 897d79ec286f196995abba40f40692c83d6011803e01b10edb67b2935455c5bc (image=quay.io/ceph/ceph:v18, name=jolly_meninsky, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 18:21:15 compute-0 sudo[95971]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:15 compute-0 systemd[1]: libpod-conmon-897d79ec286f196995abba40f40692c83d6011803e01b10edb67b2935455c5bc.scope: Deactivated successfully.
Nov 24 18:21:15 compute-0 sudo[96050]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjajoxeklavixahwrzluvnrjydkydwbw ; /usr/bin/python3'
Nov 24 18:21:15 compute-0 sudo[96050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:15 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 23 pg[3.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [1] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:15 compute-0 python3[96052]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:21:15 compute-0 podman[96053]: 2025-11-24 18:21:15.433368542 +0000 UTC m=+0.058256103 container create 5404e0ad74e3ab048bdbbf76ace392498b5f4e2aadea7fe1185b4fdb5a63362f (image=quay.io/ceph/ceph:v18, name=mystifying_chaplygin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:21:15 compute-0 systemd[1]: Started libpod-conmon-5404e0ad74e3ab048bdbbf76ace392498b5f4e2aadea7fe1185b4fdb5a63362f.scope.
Nov 24 18:21:15 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bed6ff73d5d86668ebbe880456fb93ded9b6f4e6279f0bbcfe64542ceba51a9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bed6ff73d5d86668ebbe880456fb93ded9b6f4e6279f0bbcfe64542ceba51a9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:15 compute-0 podman[96053]: 2025-11-24 18:21:15.415651754 +0000 UTC m=+0.040539285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:21:15 compute-0 podman[96053]: 2025-11-24 18:21:15.514655075 +0000 UTC m=+0.139542636 container init 5404e0ad74e3ab048bdbbf76ace392498b5f4e2aadea7fe1185b4fdb5a63362f (image=quay.io/ceph/ceph:v18, name=mystifying_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 18:21:15 compute-0 podman[96053]: 2025-11-24 18:21:15.524568855 +0000 UTC m=+0.149456416 container start 5404e0ad74e3ab048bdbbf76ace392498b5f4e2aadea7fe1185b4fdb5a63362f (image=quay.io/ceph/ceph:v18, name=mystifying_chaplygin, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:21:15 compute-0 podman[96053]: 2025-11-24 18:21:15.528083514 +0000 UTC m=+0.152971065 container attach 5404e0ad74e3ab048bdbbf76ace392498b5f4e2aadea7fe1185b4fdb5a63362f (image=quay.io/ceph/ceph:v18, name=mystifying_chaplygin, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 18:21:16 compute-0 ceph-mon[74927]: pgmap v63: 2 pgs: 1 active+clean, 1 creating+peering; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:16 compute-0 ceph-mon[74927]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 18:21:16 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 24 18:21:16 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2031348729' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 18:21:16 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Nov 24 18:21:16 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2031348729' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 18:21:16 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Nov 24 18:21:16 compute-0 mystifying_chaplygin[96068]: pool 'backups' created
Nov 24 18:21:16 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Nov 24 18:21:16 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 24 pg[3.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [1] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:16 compute-0 systemd[1]: libpod-5404e0ad74e3ab048bdbbf76ace392498b5f4e2aadea7fe1185b4fdb5a63362f.scope: Deactivated successfully.
Nov 24 18:21:16 compute-0 podman[96053]: 2025-11-24 18:21:16.074640298 +0000 UTC m=+0.699527819 container died 5404e0ad74e3ab048bdbbf76ace392498b5f4e2aadea7fe1185b4fdb5a63362f (image=quay.io/ceph/ceph:v18, name=mystifying_chaplygin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 18:21:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bed6ff73d5d86668ebbe880456fb93ded9b6f4e6279f0bbcfe64542ceba51a9-merged.mount: Deactivated successfully.
Nov 24 18:21:16 compute-0 podman[96053]: 2025-11-24 18:21:16.113009187 +0000 UTC m=+0.737896708 container remove 5404e0ad74e3ab048bdbbf76ace392498b5f4e2aadea7fe1185b4fdb5a63362f (image=quay.io/ceph/ceph:v18, name=mystifying_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:21:16 compute-0 systemd[1]: libpod-conmon-5404e0ad74e3ab048bdbbf76ace392498b5f4e2aadea7fe1185b4fdb5a63362f.scope: Deactivated successfully.
Nov 24 18:21:16 compute-0 sudo[96050]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:16 compute-0 sudo[96133]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-topwgucrhkhhpiyzrdlrqiancyxmimsq ; /usr/bin/python3'
Nov 24 18:21:16 compute-0 sudo[96133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:16 compute-0 python3[96135]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:21:16 compute-0 podman[96136]: 2025-11-24 18:21:16.427624272 +0000 UTC m=+0.044225287 container create 4d2cc798afd5c6f2e7e684db0057b46717616eed2d632d7e77106b326f2fdc94 (image=quay.io/ceph/ceph:v18, name=tender_sammet, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:21:16 compute-0 systemd[1]: Started libpod-conmon-4d2cc798afd5c6f2e7e684db0057b46717616eed2d632d7e77106b326f2fdc94.scope.
Nov 24 18:21:16 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v66: 4 pgs: 2 unknown, 1 active+clean, 1 creating+peering; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:16 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1ff59deefb2837c82b0083011b58cf74d13892df35d706ef5aac76dcb7d5c69/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1ff59deefb2837c82b0083011b58cf74d13892df35d706ef5aac76dcb7d5c69/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:16 compute-0 podman[96136]: 2025-11-24 18:21:16.409753141 +0000 UTC m=+0.026354186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:21:16 compute-0 podman[96136]: 2025-11-24 18:21:16.50792227 +0000 UTC m=+0.124523315 container init 4d2cc798afd5c6f2e7e684db0057b46717616eed2d632d7e77106b326f2fdc94 (image=quay.io/ceph/ceph:v18, name=tender_sammet, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:21:16 compute-0 podman[96136]: 2025-11-24 18:21:16.517984395 +0000 UTC m=+0.134585410 container start 4d2cc798afd5c6f2e7e684db0057b46717616eed2d632d7e77106b326f2fdc94 (image=quay.io/ceph/ceph:v18, name=tender_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 18:21:16 compute-0 podman[96136]: 2025-11-24 18:21:16.522215491 +0000 UTC m=+0.138816536 container attach 4d2cc798afd5c6f2e7e684db0057b46717616eed2d632d7e77106b326f2fdc94 (image=quay.io/ceph/ceph:v18, name=tender_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 18:21:16 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 24 pg[4.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [0] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:17 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2031348729' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 18:21:17 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2031348729' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 18:21:17 compute-0 ceph-mon[74927]: osdmap e24: 3 total, 3 up, 3 in
Nov 24 18:21:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Nov 24 18:21:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 24 18:21:17 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2454476534' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 18:21:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Nov 24 18:21:17 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Nov 24 18:21:17 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 25 pg[4.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [0] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e25 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:21:18 compute-0 ceph-mon[74927]: pgmap v66: 4 pgs: 2 unknown, 1 active+clean, 1 creating+peering; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:18 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2454476534' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 18:21:18 compute-0 ceph-mon[74927]: osdmap e25: 3 total, 3 up, 3 in
Nov 24 18:21:18 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Nov 24 18:21:18 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2454476534' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 18:21:18 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Nov 24 18:21:18 compute-0 tender_sammet[96151]: pool 'images' created
Nov 24 18:21:18 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Nov 24 18:21:18 compute-0 systemd[1]: libpod-4d2cc798afd5c6f2e7e684db0057b46717616eed2d632d7e77106b326f2fdc94.scope: Deactivated successfully.
Nov 24 18:21:18 compute-0 podman[96136]: 2025-11-24 18:21:18.092211513 +0000 UTC m=+1.708812518 container died 4d2cc798afd5c6f2e7e684db0057b46717616eed2d632d7e77106b326f2fdc94 (image=quay.io/ceph/ceph:v18, name=tender_sammet, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 18:21:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1ff59deefb2837c82b0083011b58cf74d13892df35d706ef5aac76dcb7d5c69-merged.mount: Deactivated successfully.
Nov 24 18:21:18 compute-0 podman[96136]: 2025-11-24 18:21:18.134628514 +0000 UTC m=+1.751229529 container remove 4d2cc798afd5c6f2e7e684db0057b46717616eed2d632d7e77106b326f2fdc94 (image=quay.io/ceph/ceph:v18, name=tender_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 18:21:18 compute-0 systemd[1]: libpod-conmon-4d2cc798afd5c6f2e7e684db0057b46717616eed2d632d7e77106b326f2fdc94.scope: Deactivated successfully.
Nov 24 18:21:18 compute-0 sudo[96133]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:18 compute-0 sudo[96214]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkfdurnotryfzckwcposvtnhknahyzby ; /usr/bin/python3'
Nov 24 18:21:18 compute-0 sudo[96214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:18 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 26 pg[5.0( empty local-lis/les=0/0 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [2] r=0 lpr=26 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:18 compute-0 python3[96216]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:21:18 compute-0 podman[96217]: 2025-11-24 18:21:18.469267735 +0000 UTC m=+0.039945829 container create 5dd8776412cfd3aedfa6662a2712596eb94b0da10e74d903118f587ee63feb5f (image=quay.io/ceph/ceph:v18, name=heuristic_ellis, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:21:18 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v69: 5 pgs: 3 unknown, 2 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:18 compute-0 systemd[1]: Started libpod-conmon-5dd8776412cfd3aedfa6662a2712596eb94b0da10e74d903118f587ee63feb5f.scope.
Nov 24 18:21:18 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48c848a5f7431f4241c98af994dcc950b88cce41eb764709556be9e39bf5c174/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48c848a5f7431f4241c98af994dcc950b88cce41eb764709556be9e39bf5c174/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:18 compute-0 podman[96217]: 2025-11-24 18:21:18.536592936 +0000 UTC m=+0.107271040 container init 5dd8776412cfd3aedfa6662a2712596eb94b0da10e74d903118f587ee63feb5f (image=quay.io/ceph/ceph:v18, name=heuristic_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:21:18 compute-0 podman[96217]: 2025-11-24 18:21:18.541785037 +0000 UTC m=+0.112463131 container start 5dd8776412cfd3aedfa6662a2712596eb94b0da10e74d903118f587ee63feb5f (image=quay.io/ceph/ceph:v18, name=heuristic_ellis, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:21:18 compute-0 podman[96217]: 2025-11-24 18:21:18.545119031 +0000 UTC m=+0.115797165 container attach 5dd8776412cfd3aedfa6662a2712596eb94b0da10e74d903118f587ee63feb5f (image=quay.io/ceph/ceph:v18, name=heuristic_ellis, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:21:18 compute-0 podman[96217]: 2025-11-24 18:21:18.454618465 +0000 UTC m=+0.025296579 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:21:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 24 18:21:19 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2751501735' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 18:21:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Nov 24 18:21:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2454476534' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 18:21:19 compute-0 ceph-mon[74927]: osdmap e26: 3 total, 3 up, 3 in
Nov 24 18:21:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2751501735' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 18:21:19 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2751501735' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 18:21:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Nov 24 18:21:19 compute-0 heuristic_ellis[96232]: pool 'cephfs.cephfs.meta' created
Nov 24 18:21:19 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Nov 24 18:21:19 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 27 pg[6.0( empty local-lis/les=0/0 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=27) [0] r=0 lpr=27 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:19 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 27 pg[5.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [2] r=0 lpr=26 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:19 compute-0 systemd[1]: libpod-5dd8776412cfd3aedfa6662a2712596eb94b0da10e74d903118f587ee63feb5f.scope: Deactivated successfully.
Nov 24 18:21:19 compute-0 podman[96217]: 2025-11-24 18:21:19.101404631 +0000 UTC m=+0.672082725 container died 5dd8776412cfd3aedfa6662a2712596eb94b0da10e74d903118f587ee63feb5f (image=quay.io/ceph/ceph:v18, name=heuristic_ellis, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Nov 24 18:21:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-48c848a5f7431f4241c98af994dcc950b88cce41eb764709556be9e39bf5c174-merged.mount: Deactivated successfully.
Nov 24 18:21:19 compute-0 podman[96217]: 2025-11-24 18:21:19.14929533 +0000 UTC m=+0.719973424 container remove 5dd8776412cfd3aedfa6662a2712596eb94b0da10e74d903118f587ee63feb5f (image=quay.io/ceph/ceph:v18, name=heuristic_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 18:21:19 compute-0 systemd[1]: libpod-conmon-5dd8776412cfd3aedfa6662a2712596eb94b0da10e74d903118f587ee63feb5f.scope: Deactivated successfully.
Nov 24 18:21:19 compute-0 sudo[96214]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:19 compute-0 sudo[96293]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzecrpkpldwnlsnxnfebkeavpixlinex ; /usr/bin/python3'
Nov 24 18:21:19 compute-0 sudo[96293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:19 compute-0 python3[96295]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:21:19 compute-0 podman[96296]: 2025-11-24 18:21:19.431740534 +0000 UTC m=+0.037274933 container create 3ff12a3f1a0ae8ce3d2972993aae34a15d10cbf8e9e212c984248253fdba7301 (image=quay.io/ceph/ceph:v18, name=epic_chaum, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:21:19 compute-0 systemd[1]: Started libpod-conmon-3ff12a3f1a0ae8ce3d2972993aae34a15d10cbf8e9e212c984248253fdba7301.scope.
Nov 24 18:21:19 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62e44cc07a6c727f781e46413f86c3c22427b73494ab4d5c46ec4951356fd8c9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62e44cc07a6c727f781e46413f86c3c22427b73494ab4d5c46ec4951356fd8c9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:19 compute-0 podman[96296]: 2025-11-24 18:21:19.492993221 +0000 UTC m=+0.098527630 container init 3ff12a3f1a0ae8ce3d2972993aae34a15d10cbf8e9e212c984248253fdba7301 (image=quay.io/ceph/ceph:v18, name=epic_chaum, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:21:19 compute-0 podman[96296]: 2025-11-24 18:21:19.498466859 +0000 UTC m=+0.104001278 container start 3ff12a3f1a0ae8ce3d2972993aae34a15d10cbf8e9e212c984248253fdba7301 (image=quay.io/ceph/ceph:v18, name=epic_chaum, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:21:19 compute-0 podman[96296]: 2025-11-24 18:21:19.501437564 +0000 UTC m=+0.106971973 container attach 3ff12a3f1a0ae8ce3d2972993aae34a15d10cbf8e9e212c984248253fdba7301 (image=quay.io/ceph/ceph:v18, name=epic_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:21:19 compute-0 podman[96296]: 2025-11-24 18:21:19.41378102 +0000 UTC m=+0.019315449 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:21:20 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 24 18:21:20 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2554744173' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 18:21:20 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Nov 24 18:21:20 compute-0 ceph-mon[74927]: pgmap v69: 5 pgs: 3 unknown, 2 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:20 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2751501735' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 18:21:20 compute-0 ceph-mon[74927]: osdmap e27: 3 total, 3 up, 3 in
Nov 24 18:21:20 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2554744173' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 18:21:20 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2554744173' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 18:21:20 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Nov 24 18:21:20 compute-0 epic_chaum[96312]: pool 'cephfs.cephfs.data' created
Nov 24 18:21:20 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Nov 24 18:21:20 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 28 pg[6.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=27) [0] r=0 lpr=27 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:20 compute-0 systemd[1]: libpod-3ff12a3f1a0ae8ce3d2972993aae34a15d10cbf8e9e212c984248253fdba7301.scope: Deactivated successfully.
Nov 24 18:21:20 compute-0 podman[96296]: 2025-11-24 18:21:20.118286323 +0000 UTC m=+0.723820722 container died 3ff12a3f1a0ae8ce3d2972993aae34a15d10cbf8e9e212c984248253fdba7301 (image=quay.io/ceph/ceph:v18, name=epic_chaum, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 18:21:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-62e44cc07a6c727f781e46413f86c3c22427b73494ab4d5c46ec4951356fd8c9-merged.mount: Deactivated successfully.
Nov 24 18:21:20 compute-0 podman[96296]: 2025-11-24 18:21:20.15223112 +0000 UTC m=+0.757765519 container remove 3ff12a3f1a0ae8ce3d2972993aae34a15d10cbf8e9e212c984248253fdba7301 (image=quay.io/ceph/ceph:v18, name=epic_chaum, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 18:21:20 compute-0 systemd[1]: libpod-conmon-3ff12a3f1a0ae8ce3d2972993aae34a15d10cbf8e9e212c984248253fdba7301.scope: Deactivated successfully.
Nov 24 18:21:20 compute-0 sudo[96293]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:20 compute-0 sudo[96374]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kewxnwlwevnlcovzspmgyiokaoulkrlu ; /usr/bin/python3'
Nov 24 18:21:20 compute-0 sudo[96374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:20 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 28 pg[7.0( empty local-lis/les=0/0 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [1] r=0 lpr=28 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:20 compute-0 python3[96376]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:21:20 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 3 unknown, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:20 compute-0 podman[96377]: 2025-11-24 18:21:20.546750874 +0000 UTC m=+0.049065800 container create 404ed7573b96ef0ffcd5926e800208bca3b0c864e9bdd2a2094ed790378b6650 (image=quay.io/ceph/ceph:v18, name=infallible_lewin, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:21:20 compute-0 systemd[1]: Started libpod-conmon-404ed7573b96ef0ffcd5926e800208bca3b0c864e9bdd2a2094ed790378b6650.scope.
Nov 24 18:21:20 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:20 compute-0 podman[96377]: 2025-11-24 18:21:20.52363647 +0000 UTC m=+0.025951476 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:21:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bae5647939d50c0a37a86f96822a0b918ed4be01d34d1901fcd2c9e2a5410a2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bae5647939d50c0a37a86f96822a0b918ed4be01d34d1901fcd2c9e2a5410a2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:20 compute-0 podman[96377]: 2025-11-24 18:21:20.637688371 +0000 UTC m=+0.140003307 container init 404ed7573b96ef0ffcd5926e800208bca3b0c864e9bdd2a2094ed790378b6650 (image=quay.io/ceph/ceph:v18, name=infallible_lewin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:21:20 compute-0 podman[96377]: 2025-11-24 18:21:20.642918173 +0000 UTC m=+0.145233119 container start 404ed7573b96ef0ffcd5926e800208bca3b0c864e9bdd2a2094ed790378b6650 (image=quay.io/ceph/ceph:v18, name=infallible_lewin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 18:21:20 compute-0 podman[96377]: 2025-11-24 18:21:20.646195556 +0000 UTC m=+0.148510502 container attach 404ed7573b96ef0ffcd5926e800208bca3b0c864e9bdd2a2094ed790378b6650 (image=quay.io/ceph/ceph:v18, name=infallible_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:21:21 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Nov 24 18:21:21 compute-0 ceph-mon[74927]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 18:21:21 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Nov 24 18:21:21 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Nov 24 18:21:21 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2554744173' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 18:21:21 compute-0 ceph-mon[74927]: osdmap e28: 3 total, 3 up, 3 in
Nov 24 18:21:21 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 29 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [1] r=0 lpr=28 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:21 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Nov 24 18:21:21 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3201017558' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 24 18:21:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Nov 24 18:21:22 compute-0 ceph-mon[74927]: pgmap v72: 7 pgs: 3 unknown, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:22 compute-0 ceph-mon[74927]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 18:21:22 compute-0 ceph-mon[74927]: osdmap e29: 3 total, 3 up, 3 in
Nov 24 18:21:22 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3201017558' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 24 18:21:22 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3201017558' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 24 18:21:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Nov 24 18:21:22 compute-0 infallible_lewin[96392]: enabled application 'rbd' on pool 'vms'
Nov 24 18:21:22 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Nov 24 18:21:22 compute-0 systemd[1]: libpod-404ed7573b96ef0ffcd5926e800208bca3b0c864e9bdd2a2094ed790378b6650.scope: Deactivated successfully.
Nov 24 18:21:22 compute-0 podman[96377]: 2025-11-24 18:21:22.152695283 +0000 UTC m=+1.655010249 container died 404ed7573b96ef0ffcd5926e800208bca3b0c864e9bdd2a2094ed790378b6650 (image=quay.io/ceph/ceph:v18, name=infallible_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:21:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bae5647939d50c0a37a86f96822a0b918ed4be01d34d1901fcd2c9e2a5410a2-merged.mount: Deactivated successfully.
Nov 24 18:21:22 compute-0 podman[96377]: 2025-11-24 18:21:22.200840288 +0000 UTC m=+1.703155214 container remove 404ed7573b96ef0ffcd5926e800208bca3b0c864e9bdd2a2094ed790378b6650 (image=quay.io/ceph/ceph:v18, name=infallible_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:21:22 compute-0 systemd[1]: libpod-conmon-404ed7573b96ef0ffcd5926e800208bca3b0c864e9bdd2a2094ed790378b6650.scope: Deactivated successfully.
Nov 24 18:21:22 compute-0 sudo[96374]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:22 compute-0 sudo[96454]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgozztyksmzdcattefuebcbuwixdnqnt ; /usr/bin/python3'
Nov 24 18:21:22 compute-0 sudo[96454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:21:22 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 3 unknown, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:22 compute-0 python3[96456]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:21:22 compute-0 podman[96457]: 2025-11-24 18:21:22.60555585 +0000 UTC m=+0.076239917 container create 3debe000492041d9f8fe397e8fe3f437a0474937a992e06e1c78b51378a869a9 (image=quay.io/ceph/ceph:v18, name=awesome_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 24 18:21:22 compute-0 podman[96457]: 2025-11-24 18:21:22.552047908 +0000 UTC m=+0.022731985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:21:22 compute-0 systemd[1]: Started libpod-conmon-3debe000492041d9f8fe397e8fe3f437a0474937a992e06e1c78b51378a869a9.scope.
Nov 24 18:21:22 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc0b0b31940ae919bf7cbf65f5941e23c7062b78a0f6f0ce8fe86e8a0ad0796/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc0b0b31940ae919bf7cbf65f5941e23c7062b78a0f6f0ce8fe86e8a0ad0796/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:22 compute-0 podman[96457]: 2025-11-24 18:21:22.706240683 +0000 UTC m=+0.176925100 container init 3debe000492041d9f8fe397e8fe3f437a0474937a992e06e1c78b51378a869a9 (image=quay.io/ceph/ceph:v18, name=awesome_antonelli, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 18:21:22 compute-0 podman[96457]: 2025-11-24 18:21:22.710953962 +0000 UTC m=+0.181638019 container start 3debe000492041d9f8fe397e8fe3f437a0474937a992e06e1c78b51378a869a9 (image=quay.io/ceph/ceph:v18, name=awesome_antonelli, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:21:22 compute-0 podman[96457]: 2025-11-24 18:21:22.713873796 +0000 UTC m=+0.184557903 container attach 3debe000492041d9f8fe397e8fe3f437a0474937a992e06e1c78b51378a869a9 (image=quay.io/ceph/ceph:v18, name=awesome_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 24 18:21:23 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3201017558' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 24 18:21:23 compute-0 ceph-mon[74927]: osdmap e30: 3 total, 3 up, 3 in
Nov 24 18:21:23 compute-0 ceph-mon[74927]: pgmap v75: 7 pgs: 3 unknown, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:23 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Nov 24 18:21:23 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1423363930' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 24 18:21:24 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Nov 24 18:21:24 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1423363930' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 24 18:21:24 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1423363930' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 24 18:21:24 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Nov 24 18:21:24 compute-0 awesome_antonelli[96472]: enabled application 'rbd' on pool 'volumes'
Nov 24 18:21:24 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Nov 24 18:21:24 compute-0 systemd[1]: libpod-3debe000492041d9f8fe397e8fe3f437a0474937a992e06e1c78b51378a869a9.scope: Deactivated successfully.
Nov 24 18:21:24 compute-0 podman[96457]: 2025-11-24 18:21:24.187334089 +0000 UTC m=+1.658018146 container died 3debe000492041d9f8fe397e8fe3f437a0474937a992e06e1c78b51378a869a9 (image=quay.io/ceph/ceph:v18, name=awesome_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 18:21:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bc0b0b31940ae919bf7cbf65f5941e23c7062b78a0f6f0ce8fe86e8a0ad0796-merged.mount: Deactivated successfully.
Nov 24 18:21:24 compute-0 podman[96457]: 2025-11-24 18:21:24.226002836 +0000 UTC m=+1.696686893 container remove 3debe000492041d9f8fe397e8fe3f437a0474937a992e06e1c78b51378a869a9 (image=quay.io/ceph/ceph:v18, name=awesome_antonelli, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:21:24 compute-0 systemd[1]: libpod-conmon-3debe000492041d9f8fe397e8fe3f437a0474937a992e06e1c78b51378a869a9.scope: Deactivated successfully.
Nov 24 18:21:24 compute-0 sudo[96454]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:24 compute-0 sudo[96532]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqoolhvxsjvorsgqjkutrrjquxdlefwl ; /usr/bin/python3'
Nov 24 18:21:24 compute-0 sudo[96532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:24 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:24 compute-0 python3[96534]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:21:24 compute-0 podman[96535]: 2025-11-24 18:21:24.566834513 +0000 UTC m=+0.056950229 container create 956c72f34b3fdef44d837e33293fd4334209f40f71b6a62c038f51b474808037 (image=quay.io/ceph/ceph:v18, name=modest_ritchie, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 18:21:24 compute-0 systemd[1]: Started libpod-conmon-956c72f34b3fdef44d837e33293fd4334209f40f71b6a62c038f51b474808037.scope.
Nov 24 18:21:24 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a49a799cd3b65b71d9897b75141c6f618af8421a2e86392fbbd54a371f5e2709/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a49a799cd3b65b71d9897b75141c6f618af8421a2e86392fbbd54a371f5e2709/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:24 compute-0 podman[96535]: 2025-11-24 18:21:24.634374318 +0000 UTC m=+0.124490034 container init 956c72f34b3fdef44d837e33293fd4334209f40f71b6a62c038f51b474808037 (image=quay.io/ceph/ceph:v18, name=modest_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 18:21:24 compute-0 podman[96535]: 2025-11-24 18:21:24.541910944 +0000 UTC m=+0.032026680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:21:24 compute-0 podman[96535]: 2025-11-24 18:21:24.646445483 +0000 UTC m=+0.136561209 container start 956c72f34b3fdef44d837e33293fd4334209f40f71b6a62c038f51b474808037 (image=quay.io/ceph/ceph:v18, name=modest_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:21:24 compute-0 podman[96535]: 2025-11-24 18:21:24.650184918 +0000 UTC m=+0.140300634 container attach 956c72f34b3fdef44d837e33293fd4334209f40f71b6a62c038f51b474808037 (image=quay.io/ceph/ceph:v18, name=modest_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:21:25 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Nov 24 18:21:25 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3443822626' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 24 18:21:25 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Nov 24 18:21:25 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1423363930' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 24 18:21:25 compute-0 ceph-mon[74927]: osdmap e31: 3 total, 3 up, 3 in
Nov 24 18:21:25 compute-0 ceph-mon[74927]: pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:25 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3443822626' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 24 18:21:25 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3443822626' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 24 18:21:25 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Nov 24 18:21:25 compute-0 modest_ritchie[96550]: enabled application 'rbd' on pool 'backups'
Nov 24 18:21:25 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Nov 24 18:21:25 compute-0 systemd[1]: libpod-956c72f34b3fdef44d837e33293fd4334209f40f71b6a62c038f51b474808037.scope: Deactivated successfully.
Nov 24 18:21:25 compute-0 podman[96535]: 2025-11-24 18:21:25.193778097 +0000 UTC m=+0.683893863 container died 956c72f34b3fdef44d837e33293fd4334209f40f71b6a62c038f51b474808037 (image=quay.io/ceph/ceph:v18, name=modest_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 18:21:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-a49a799cd3b65b71d9897b75141c6f618af8421a2e86392fbbd54a371f5e2709-merged.mount: Deactivated successfully.
Nov 24 18:21:25 compute-0 podman[96535]: 2025-11-24 18:21:25.231591102 +0000 UTC m=+0.721706818 container remove 956c72f34b3fdef44d837e33293fd4334209f40f71b6a62c038f51b474808037 (image=quay.io/ceph/ceph:v18, name=modest_ritchie, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:21:25 compute-0 systemd[1]: libpod-conmon-956c72f34b3fdef44d837e33293fd4334209f40f71b6a62c038f51b474808037.scope: Deactivated successfully.
Nov 24 18:21:25 compute-0 sudo[96532]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:25 compute-0 sudo[96610]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnrsnxpubialiiwzlijdzzpcmvojsamx ; /usr/bin/python3'
Nov 24 18:21:25 compute-0 sudo[96610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:25 compute-0 python3[96612]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:21:25 compute-0 podman[96613]: 2025-11-24 18:21:25.540859372 +0000 UTC m=+0.034914632 container create 3a919e337e7d4f922b2b71338ee3b6022201313b5966d2e65ab9780a37406612 (image=quay.io/ceph/ceph:v18, name=relaxed_spence, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 18:21:25 compute-0 systemd[1]: Started libpod-conmon-3a919e337e7d4f922b2b71338ee3b6022201313b5966d2e65ab9780a37406612.scope.
Nov 24 18:21:25 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbaa7afdbb2ad284ac656a4327e1f257486979fb228c695e394824e6396d03dd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbaa7afdbb2ad284ac656a4327e1f257486979fb228c695e394824e6396d03dd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:25 compute-0 podman[96613]: 2025-11-24 18:21:25.608732097 +0000 UTC m=+0.102787357 container init 3a919e337e7d4f922b2b71338ee3b6022201313b5966d2e65ab9780a37406612 (image=quay.io/ceph/ceph:v18, name=relaxed_spence, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 18:21:25 compute-0 podman[96613]: 2025-11-24 18:21:25.615391575 +0000 UTC m=+0.109446835 container start 3a919e337e7d4f922b2b71338ee3b6022201313b5966d2e65ab9780a37406612 (image=quay.io/ceph/ceph:v18, name=relaxed_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 24 18:21:25 compute-0 podman[96613]: 2025-11-24 18:21:25.618402321 +0000 UTC m=+0.112457611 container attach 3a919e337e7d4f922b2b71338ee3b6022201313b5966d2e65ab9780a37406612 (image=quay.io/ceph/ceph:v18, name=relaxed_spence, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:21:25 compute-0 podman[96613]: 2025-11-24 18:21:25.524813547 +0000 UTC m=+0.018868827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:21:26 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Nov 24 18:21:26 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3223936195' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 24 18:21:26 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Nov 24 18:21:26 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3443822626' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 24 18:21:26 compute-0 ceph-mon[74927]: osdmap e32: 3 total, 3 up, 3 in
Nov 24 18:21:26 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3223936195' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 24 18:21:26 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3223936195' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 24 18:21:26 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Nov 24 18:21:26 compute-0 relaxed_spence[96629]: enabled application 'rbd' on pool 'images'
Nov 24 18:21:26 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Nov 24 18:21:26 compute-0 systemd[1]: libpod-3a919e337e7d4f922b2b71338ee3b6022201313b5966d2e65ab9780a37406612.scope: Deactivated successfully.
Nov 24 18:21:26 compute-0 podman[96654]: 2025-11-24 18:21:26.237077186 +0000 UTC m=+0.021477933 container died 3a919e337e7d4f922b2b71338ee3b6022201313b5966d2e65ab9780a37406612 (image=quay.io/ceph/ceph:v18, name=relaxed_spence, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 18:21:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbaa7afdbb2ad284ac656a4327e1f257486979fb228c695e394824e6396d03dd-merged.mount: Deactivated successfully.
Nov 24 18:21:26 compute-0 podman[96654]: 2025-11-24 18:21:26.283871978 +0000 UTC m=+0.068272695 container remove 3a919e337e7d4f922b2b71338ee3b6022201313b5966d2e65ab9780a37406612 (image=quay.io/ceph/ceph:v18, name=relaxed_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:21:26 compute-0 systemd[1]: libpod-conmon-3a919e337e7d4f922b2b71338ee3b6022201313b5966d2e65ab9780a37406612.scope: Deactivated successfully.
Nov 24 18:21:26 compute-0 sudo[96610]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:26 compute-0 sudo[96693]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkfwmtvxfwzzzvdqazfzxsaxxzajneut ; /usr/bin/python3'
Nov 24 18:21:26 compute-0 sudo[96693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:26 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:26 compute-0 python3[96695]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:21:26 compute-0 podman[96696]: 2025-11-24 18:21:26.63555978 +0000 UTC m=+0.051118792 container create 270c24d2c5734502c65d6e34e2282ecb98e132f163d3bf269321bd9d7900c68a (image=quay.io/ceph/ceph:v18, name=romantic_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:21:26 compute-0 systemd[1]: Started libpod-conmon-270c24d2c5734502c65d6e34e2282ecb98e132f163d3bf269321bd9d7900c68a.scope.
Nov 24 18:21:26 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14bc025b1e99557b72e288e8996d24ff6af67e1b1397497a8ae87fb6aa36bd56/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14bc025b1e99557b72e288e8996d24ff6af67e1b1397497a8ae87fb6aa36bd56/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:26 compute-0 podman[96696]: 2025-11-24 18:21:26.694599301 +0000 UTC m=+0.110158333 container init 270c24d2c5734502c65d6e34e2282ecb98e132f163d3bf269321bd9d7900c68a (image=quay.io/ceph/ceph:v18, name=romantic_brahmagupta, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:21:26 compute-0 podman[96696]: 2025-11-24 18:21:26.699597577 +0000 UTC m=+0.115156609 container start 270c24d2c5734502c65d6e34e2282ecb98e132f163d3bf269321bd9d7900c68a (image=quay.io/ceph/ceph:v18, name=romantic_brahmagupta, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:21:26 compute-0 podman[96696]: 2025-11-24 18:21:26.702713306 +0000 UTC m=+0.118272338 container attach 270c24d2c5734502c65d6e34e2282ecb98e132f163d3bf269321bd9d7900c68a (image=quay.io/ceph/ceph:v18, name=romantic_brahmagupta, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 18:21:26 compute-0 podman[96696]: 2025-11-24 18:21:26.619524195 +0000 UTC m=+0.035083227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:21:27 compute-0 ceph-mon[74927]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 18:21:27 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3223936195' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 24 18:21:27 compute-0 ceph-mon[74927]: osdmap e33: 3 total, 3 up, 3 in
Nov 24 18:21:27 compute-0 ceph-mon[74927]: pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Nov 24 18:21:27 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1036764824' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 24 18:21:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:21:28 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Nov 24 18:21:28 compute-0 ceph-mon[74927]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 18:21:28 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1036764824' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 24 18:21:28 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1036764824' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 24 18:21:28 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Nov 24 18:21:28 compute-0 romantic_brahmagupta[96711]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Nov 24 18:21:28 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Nov 24 18:21:28 compute-0 systemd[1]: libpod-270c24d2c5734502c65d6e34e2282ecb98e132f163d3bf269321bd9d7900c68a.scope: Deactivated successfully.
Nov 24 18:21:28 compute-0 podman[96736]: 2025-11-24 18:21:28.279316923 +0000 UTC m=+0.034935783 container died 270c24d2c5734502c65d6e34e2282ecb98e132f163d3bf269321bd9d7900c68a (image=quay.io/ceph/ceph:v18, name=romantic_brahmagupta, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 18:21:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-14bc025b1e99557b72e288e8996d24ff6af67e1b1397497a8ae87fb6aa36bd56-merged.mount: Deactivated successfully.
Nov 24 18:21:28 compute-0 podman[96736]: 2025-11-24 18:21:28.325807427 +0000 UTC m=+0.081426287 container remove 270c24d2c5734502c65d6e34e2282ecb98e132f163d3bf269321bd9d7900c68a (image=quay.io/ceph/ceph:v18, name=romantic_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 18:21:28 compute-0 systemd[1]: libpod-conmon-270c24d2c5734502c65d6e34e2282ecb98e132f163d3bf269321bd9d7900c68a.scope: Deactivated successfully.
Nov 24 18:21:28 compute-0 sudo[96693]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:28 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v82: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:28 compute-0 sudo[96774]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llizeudywtwhyqemwiaucghxvztlkrpi ; /usr/bin/python3'
Nov 24 18:21:28 compute-0 sudo[96774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:28 compute-0 python3[96776]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:21:28 compute-0 podman[96777]: 2025-11-24 18:21:28.665479926 +0000 UTC m=+0.035896068 container create 506e07d7ac00e386af3ea46c1b1d7a81c4e3431015f88f452048679205492d4b (image=quay.io/ceph/ceph:v18, name=friendly_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 18:21:28 compute-0 systemd[1]: Started libpod-conmon-506e07d7ac00e386af3ea46c1b1d7a81c4e3431015f88f452048679205492d4b.scope.
Nov 24 18:21:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2d2ec248d43e20f18ccf920bc6083fb9e4fe5b440ed6c075c6cdabc30be9e6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2d2ec248d43e20f18ccf920bc6083fb9e4fe5b440ed6c075c6cdabc30be9e6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:28 compute-0 podman[96777]: 2025-11-24 18:21:28.650003325 +0000 UTC m=+0.020419487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:21:28 compute-0 podman[96777]: 2025-11-24 18:21:28.752292128 +0000 UTC m=+0.122708350 container init 506e07d7ac00e386af3ea46c1b1d7a81c4e3431015f88f452048679205492d4b (image=quay.io/ceph/ceph:v18, name=friendly_williamson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:21:28 compute-0 podman[96777]: 2025-11-24 18:21:28.756983447 +0000 UTC m=+0.127399579 container start 506e07d7ac00e386af3ea46c1b1d7a81c4e3431015f88f452048679205492d4b (image=quay.io/ceph/ceph:v18, name=friendly_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 24 18:21:28 compute-0 podman[96777]: 2025-11-24 18:21:28.760619519 +0000 UTC m=+0.131035691 container attach 506e07d7ac00e386af3ea46c1b1d7a81c4e3431015f88f452048679205492d4b (image=quay.io/ceph/ceph:v18, name=friendly_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:21:29 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1036764824' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 24 18:21:29 compute-0 ceph-mon[74927]: osdmap e34: 3 total, 3 up, 3 in
Nov 24 18:21:29 compute-0 ceph-mon[74927]: pgmap v82: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:29 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Nov 24 18:21:29 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/871528631' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 24 18:21:30 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Nov 24 18:21:30 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/871528631' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 24 18:21:30 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/871528631' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 24 18:21:30 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Nov 24 18:21:30 compute-0 friendly_williamson[96792]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Nov 24 18:21:30 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Nov 24 18:21:30 compute-0 systemd[1]: libpod-506e07d7ac00e386af3ea46c1b1d7a81c4e3431015f88f452048679205492d4b.scope: Deactivated successfully.
Nov 24 18:21:30 compute-0 podman[96777]: 2025-11-24 18:21:30.26091956 +0000 UTC m=+1.631335712 container died 506e07d7ac00e386af3ea46c1b1d7a81c4e3431015f88f452048679205492d4b (image=quay.io/ceph/ceph:v18, name=friendly_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 18:21:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e2d2ec248d43e20f18ccf920bc6083fb9e4fe5b440ed6c075c6cdabc30be9e6-merged.mount: Deactivated successfully.
Nov 24 18:21:30 compute-0 podman[96777]: 2025-11-24 18:21:30.295661138 +0000 UTC m=+1.666077280 container remove 506e07d7ac00e386af3ea46c1b1d7a81c4e3431015f88f452048679205492d4b (image=quay.io/ceph/ceph:v18, name=friendly_williamson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:21:30 compute-0 systemd[1]: libpod-conmon-506e07d7ac00e386af3ea46c1b1d7a81c4e3431015f88f452048679205492d4b.scope: Deactivated successfully.
Nov 24 18:21:30 compute-0 sudo[96774]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:30 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v84: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:31 compute-0 python3[96902]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 18:21:31 compute-0 ceph-mon[74927]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 24 18:21:31 compute-0 ceph-mon[74927]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 24 18:21:31 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/871528631' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 24 18:21:31 compute-0 ceph-mon[74927]: osdmap e35: 3 total, 3 up, 3 in
Nov 24 18:21:31 compute-0 ceph-mon[74927]: pgmap v84: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:31 compute-0 python3[96973]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764008490.9135487-36807-49611141934198/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:21:32 compute-0 sudo[97073]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oawxgszkpygnkcednzuyskoefkuosojq ; /usr/bin/python3'
Nov 24 18:21:32 compute-0 sudo[97073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:32 compute-0 ceph-mon[74927]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 24 18:21:32 compute-0 ceph-mon[74927]: Cluster is now healthy
Nov 24 18:21:32 compute-0 python3[97075]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 18:21:32 compute-0 sudo[97073]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:21:32 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v85: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:32 compute-0 sudo[97148]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozzazhhmeihmilutqwwlcspzfgbxruew ; /usr/bin/python3'
Nov 24 18:21:32 compute-0 sudo[97148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:32 compute-0 python3[97150]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764008491.9129853-36821-134738530745701/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=16f341f65597afdb4f6379924c5b911b6b6b7430 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:21:32 compute-0 sudo[97148]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:32 compute-0 sudo[97198]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgmnuybpruknjwhhilahqckyluflpabp ; /usr/bin/python3'
Nov 24 18:21:32 compute-0 sudo[97198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:33 compute-0 python3[97200]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:21:33 compute-0 podman[97201]: 2025-11-24 18:21:33.060218298 +0000 UTC m=+0.039131653 container create 102f3d550bee38263bfe35a79d1acf215a5f75a52437babda1fc4341f31e87b4 (image=quay.io/ceph/ceph:v18, name=agitated_darwin, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:21:33 compute-0 systemd[1]: Started libpod-conmon-102f3d550bee38263bfe35a79d1acf215a5f75a52437babda1fc4341f31e87b4.scope.
Nov 24 18:21:33 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae55e0299dd643136efcbe80a8295a22eebf9dd683f24cd83d8211559b52be77/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae55e0299dd643136efcbe80a8295a22eebf9dd683f24cd83d8211559b52be77/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae55e0299dd643136efcbe80a8295a22eebf9dd683f24cd83d8211559b52be77/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:33 compute-0 podman[97201]: 2025-11-24 18:21:33.131207501 +0000 UTC m=+0.110120886 container init 102f3d550bee38263bfe35a79d1acf215a5f75a52437babda1fc4341f31e87b4 (image=quay.io/ceph/ceph:v18, name=agitated_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 18:21:33 compute-0 podman[97201]: 2025-11-24 18:21:33.135673892 +0000 UTC m=+0.114587247 container start 102f3d550bee38263bfe35a79d1acf215a5f75a52437babda1fc4341f31e87b4 (image=quay.io/ceph/ceph:v18, name=agitated_darwin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 18:21:33 compute-0 podman[97201]: 2025-11-24 18:21:33.04097201 +0000 UTC m=+0.019885395 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:21:33 compute-0 podman[97201]: 2025-11-24 18:21:33.138566804 +0000 UTC m=+0.117480159 container attach 102f3d550bee38263bfe35a79d1acf215a5f75a52437babda1fc4341f31e87b4 (image=quay.io/ceph/ceph:v18, name=agitated_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 18:21:33 compute-0 ceph-mon[74927]: pgmap v85: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:33 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 24 18:21:33 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4075813533' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 24 18:21:33 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4075813533' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 24 18:21:33 compute-0 agitated_darwin[97217]: 
Nov 24 18:21:33 compute-0 agitated_darwin[97217]: [global]
Nov 24 18:21:33 compute-0 agitated_darwin[97217]:         fsid = e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 18:21:33 compute-0 agitated_darwin[97217]:         mon_host = 192.168.122.100
Nov 24 18:21:33 compute-0 systemd[1]: libpod-102f3d550bee38263bfe35a79d1acf215a5f75a52437babda1fc4341f31e87b4.scope: Deactivated successfully.
Nov 24 18:21:33 compute-0 podman[97201]: 2025-11-24 18:21:33.671370483 +0000 UTC m=+0.650283848 container died 102f3d550bee38263bfe35a79d1acf215a5f75a52437babda1fc4341f31e87b4 (image=quay.io/ceph/ceph:v18, name=agitated_darwin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 18:21:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae55e0299dd643136efcbe80a8295a22eebf9dd683f24cd83d8211559b52be77-merged.mount: Deactivated successfully.
Nov 24 18:21:33 compute-0 podman[97201]: 2025-11-24 18:21:33.70788315 +0000 UTC m=+0.686796505 container remove 102f3d550bee38263bfe35a79d1acf215a5f75a52437babda1fc4341f31e87b4 (image=quay.io/ceph/ceph:v18, name=agitated_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 18:21:33 compute-0 systemd[1]: libpod-conmon-102f3d550bee38263bfe35a79d1acf215a5f75a52437babda1fc4341f31e87b4.scope: Deactivated successfully.
Nov 24 18:21:33 compute-0 sudo[97242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:33 compute-0 sudo[97242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:33 compute-0 sudo[97242]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:33 compute-0 sudo[97198]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:33 compute-0 sudo[97278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:21:33 compute-0 sudo[97278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:33 compute-0 sudo[97278]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:33 compute-0 sudo[97303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:33 compute-0 sudo[97303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:33 compute-0 sudo[97303]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:33 compute-0 sudo[97374]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkbnbzxrcxwdohmidsttxxfmfvqsunba ; /usr/bin/python3'
Nov 24 18:21:33 compute-0 sudo[97374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:33 compute-0 sudo[97330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 24 18:21:33 compute-0 sudo[97330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:33 compute-0 python3[97377]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:21:34 compute-0 podman[97379]: 2025-11-24 18:21:34.061753517 +0000 UTC m=+0.052208377 container create f6fa2bda071a4ce0a57d76dde4eb444964ecca162c2de13d2ffbe211e36fbc25 (image=quay.io/ceph/ceph:v18, name=nifty_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 18:21:34 compute-0 systemd[1]: Started libpod-conmon-f6fa2bda071a4ce0a57d76dde4eb444964ecca162c2de13d2ffbe211e36fbc25.scope.
Nov 24 18:21:34 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b630ed71b5269f3bcc77e7357f59052b0d714d13bfcef1130b193770e6e05e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b630ed71b5269f3bcc77e7357f59052b0d714d13bfcef1130b193770e6e05e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b630ed71b5269f3bcc77e7357f59052b0d714d13bfcef1130b193770e6e05e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:34 compute-0 podman[97379]: 2025-11-24 18:21:34.044509929 +0000 UTC m=+0.034964789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:21:34 compute-0 podman[97379]: 2025-11-24 18:21:34.159667968 +0000 UTC m=+0.150122828 container init f6fa2bda071a4ce0a57d76dde4eb444964ecca162c2de13d2ffbe211e36fbc25 (image=quay.io/ceph/ceph:v18, name=nifty_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 18:21:34 compute-0 podman[97379]: 2025-11-24 18:21:34.166564479 +0000 UTC m=+0.157019319 container start f6fa2bda071a4ce0a57d76dde4eb444964ecca162c2de13d2ffbe211e36fbc25 (image=quay.io/ceph/ceph:v18, name=nifty_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:21:34 compute-0 podman[97379]: 2025-11-24 18:21:34.169948333 +0000 UTC m=+0.160403193 container attach f6fa2bda071a4ce0a57d76dde4eb444964ecca162c2de13d2ffbe211e36fbc25 (image=quay.io/ceph/ceph:v18, name=nifty_dubinsky, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:21:34 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/4075813533' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 24 18:21:34 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/4075813533' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 24 18:21:34 compute-0 podman[97464]: 2025-11-24 18:21:34.424387291 +0000 UTC m=+0.050524665 container exec 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v86: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:21:34
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'vms', 'backups', 'images', 'volumes', 'cephfs.cephfs.meta']
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:21:34 compute-0 podman[97464]: 2025-11-24 18:21:34.529764388 +0000 UTC m=+0.155901752 container exec_died 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 24 18:21:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Nov 24 18:21:34 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:21:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:21:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Nov 24 18:21:34 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1665243259' entity='client.admin' 
Nov 24 18:21:34 compute-0 nifty_dubinsky[97417]: set ssl_option
Nov 24 18:21:34 compute-0 systemd[1]: libpod-f6fa2bda071a4ce0a57d76dde4eb444964ecca162c2de13d2ffbe211e36fbc25.scope: Deactivated successfully.
Nov 24 18:21:34 compute-0 podman[97379]: 2025-11-24 18:21:34.813243897 +0000 UTC m=+0.803698737 container died f6fa2bda071a4ce0a57d76dde4eb444964ecca162c2de13d2ffbe211e36fbc25 (image=quay.io/ceph/ceph:v18, name=nifty_dubinsky, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 18:21:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-23b630ed71b5269f3bcc77e7357f59052b0d714d13bfcef1130b193770e6e05e-merged.mount: Deactivated successfully.
Nov 24 18:21:34 compute-0 podman[97379]: 2025-11-24 18:21:34.853005864 +0000 UTC m=+0.843460704 container remove f6fa2bda071a4ce0a57d76dde4eb444964ecca162c2de13d2ffbe211e36fbc25 (image=quay.io/ceph/ceph:v18, name=nifty_dubinsky, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:21:34 compute-0 systemd[1]: libpod-conmon-f6fa2bda071a4ce0a57d76dde4eb444964ecca162c2de13d2ffbe211e36fbc25.scope: Deactivated successfully.
Nov 24 18:21:34 compute-0 sudo[97374]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:34 compute-0 sudo[97330]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:21:35 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:21:35 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:21:35 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:21:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:21:35 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:21:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:21:35 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:35 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev d08a0c24-e797-4c03-ae7e-a8d0c69d3730 does not exist
Nov 24 18:21:35 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 30f0eaec-d29c-475c-9ba6-c8d6b26d874b does not exist
Nov 24 18:21:35 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 6a28b409-90a2-4665-9d64-0f54ca9cfdab does not exist
Nov 24 18:21:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:21:35 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:21:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:21:35 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:21:35 compute-0 sudo[97642]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qerjjbdrbyehumtcjiwrdlimvvscavnp ; /usr/bin/python3'
Nov 24 18:21:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:21:35 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:21:35 compute-0 sudo[97642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:35 compute-0 sudo[97645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:35 compute-0 sudo[97645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:35 compute-0 sudo[97645]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:35 compute-0 sudo[97670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:21:35 compute-0 sudo[97670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:35 compute-0 sudo[97670]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:35 compute-0 python3[97644]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:21:35 compute-0 sudo[97695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:35 compute-0 sudo[97695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:35 compute-0 sudo[97695]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:35 compute-0 podman[97719]: 2025-11-24 18:21:35.261945599 +0000 UTC m=+0.038064677 container create 67a9ecf47ff818502d6ab6cc0a7e8bc8c83f9ca53f66f13c71292066d57cd0e5 (image=quay.io/ceph/ceph:v18, name=hungry_bassi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 18:21:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Nov 24 18:21:35 compute-0 sudo[97722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:21:35 compute-0 ceph-mon[74927]: pgmap v86: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 18:21:35 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1665243259' entity='client.admin' 
Nov 24 18:21:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:21:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:21:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:21:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:21:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:21:35 compute-0 sudo[97722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:35 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 24 18:21:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Nov 24 18:21:35 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Nov 24 18:21:35 compute-0 ceph-mgr[75218]: [progress INFO root] update: starting ev ed1dd8bb-0ade-4f15-b635-ab212938f2fe (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 24 18:21:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Nov 24 18:21:35 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 18:21:35 compute-0 systemd[1]: Started libpod-conmon-67a9ecf47ff818502d6ab6cc0a7e8bc8c83f9ca53f66f13c71292066d57cd0e5.scope.
Nov 24 18:21:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997d7a3446af536534d49453f0aa2f3b65a92fcb22ff089e21d3be4e1ed75cba/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997d7a3446af536534d49453f0aa2f3b65a92fcb22ff089e21d3be4e1ed75cba/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997d7a3446af536534d49453f0aa2f3b65a92fcb22ff089e21d3be4e1ed75cba/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:35 compute-0 podman[97719]: 2025-11-24 18:21:35.339480213 +0000 UTC m=+0.115599311 container init 67a9ecf47ff818502d6ab6cc0a7e8bc8c83f9ca53f66f13c71292066d57cd0e5 (image=quay.io/ceph/ceph:v18, name=hungry_bassi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 18:21:35 compute-0 podman[97719]: 2025-11-24 18:21:35.24668672 +0000 UTC m=+0.022805808 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:21:35 compute-0 podman[97719]: 2025-11-24 18:21:35.345948063 +0000 UTC m=+0.122067141 container start 67a9ecf47ff818502d6ab6cc0a7e8bc8c83f9ca53f66f13c71292066d57cd0e5 (image=quay.io/ceph/ceph:v18, name=hungry_bassi, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:21:35 compute-0 podman[97719]: 2025-11-24 18:21:35.348684161 +0000 UTC m=+0.124803239 container attach 67a9ecf47ff818502d6ab6cc0a7e8bc8c83f9ca53f66f13c71292066d57cd0e5 (image=quay.io/ceph/ceph:v18, name=hungry_bassi, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:21:35 compute-0 podman[97803]: 2025-11-24 18:21:35.5822171 +0000 UTC m=+0.040412604 container create b2b544daddfc90baf3fccd14a7ddfa8c74744fae55a384d0098050b03375d249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_rhodes, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 18:21:35 compute-0 systemd[1]: Started libpod-conmon-b2b544daddfc90baf3fccd14a7ddfa8c74744fae55a384d0098050b03375d249.scope.
Nov 24 18:21:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:35 compute-0 podman[97803]: 2025-11-24 18:21:35.645509582 +0000 UTC m=+0.103705086 container init b2b544daddfc90baf3fccd14a7ddfa8c74744fae55a384d0098050b03375d249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_rhodes, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 18:21:35 compute-0 podman[97803]: 2025-11-24 18:21:35.650721941 +0000 UTC m=+0.108917435 container start b2b544daddfc90baf3fccd14a7ddfa8c74744fae55a384d0098050b03375d249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:21:35 compute-0 podman[97803]: 2025-11-24 18:21:35.653758507 +0000 UTC m=+0.111954031 container attach b2b544daddfc90baf3fccd14a7ddfa8c74744fae55a384d0098050b03375d249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_rhodes, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 18:21:35 compute-0 jolly_rhodes[97818]: 167 167
Nov 24 18:21:35 compute-0 systemd[1]: libpod-b2b544daddfc90baf3fccd14a7ddfa8c74744fae55a384d0098050b03375d249.scope: Deactivated successfully.
Nov 24 18:21:35 compute-0 podman[97803]: 2025-11-24 18:21:35.655030908 +0000 UTC m=+0.113226412 container died b2b544daddfc90baf3fccd14a7ddfa8c74744fae55a384d0098050b03375d249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_rhodes, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:21:35 compute-0 podman[97803]: 2025-11-24 18:21:35.56568185 +0000 UTC m=+0.023877384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:21:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-56c39e3a35d63ba7d0a0792f04bc7b95d1083c2548199e79c24c175ffbcb2792-merged.mount: Deactivated successfully.
Nov 24 18:21:35 compute-0 podman[97803]: 2025-11-24 18:21:35.686378427 +0000 UTC m=+0.144573921 container remove b2b544daddfc90baf3fccd14a7ddfa8c74744fae55a384d0098050b03375d249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:21:35 compute-0 systemd[1]: libpod-conmon-b2b544daddfc90baf3fccd14a7ddfa8c74744fae55a384d0098050b03375d249.scope: Deactivated successfully.
Nov 24 18:21:35 compute-0 podman[97862]: 2025-11-24 18:21:35.846137613 +0000 UTC m=+0.050023783 container create f50666e6f2cc17d306a03b9c8d8988856d7461776adca9a4ae504319596d6388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 18:21:35 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:21:35 compute-0 ceph-mgr[75218]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Nov 24 18:21:35 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 24 18:21:35 compute-0 systemd[1]: Started libpod-conmon-f50666e6f2cc17d306a03b9c8d8988856d7461776adca9a4ae504319596d6388.scope.
Nov 24 18:21:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 24 18:21:35 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:35 compute-0 hungry_bassi[97760]: Scheduled rgw.rgw update...
Nov 24 18:21:35 compute-0 podman[97719]: 2025-11-24 18:21:35.915986238 +0000 UTC m=+0.692105356 container died 67a9ecf47ff818502d6ab6cc0a7e8bc8c83f9ca53f66f13c71292066d57cd0e5 (image=quay.io/ceph/ceph:v18, name=hungry_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:21:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:35 compute-0 systemd[1]: libpod-67a9ecf47ff818502d6ab6cc0a7e8bc8c83f9ca53f66f13c71292066d57cd0e5.scope: Deactivated successfully.
Nov 24 18:21:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c23b481e2f588c59e78f9cbdc47449b5ec69523cc16487750f505dc46227f91/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c23b481e2f588c59e78f9cbdc47449b5ec69523cc16487750f505dc46227f91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c23b481e2f588c59e78f9cbdc47449b5ec69523cc16487750f505dc46227f91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c23b481e2f588c59e78f9cbdc47449b5ec69523cc16487750f505dc46227f91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:35 compute-0 podman[97862]: 2025-11-24 18:21:35.831381137 +0000 UTC m=+0.035267337 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:21:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c23b481e2f588c59e78f9cbdc47449b5ec69523cc16487750f505dc46227f91/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:35 compute-0 podman[97862]: 2025-11-24 18:21:35.940396034 +0000 UTC m=+0.144282224 container init f50666e6f2cc17d306a03b9c8d8988856d7461776adca9a4ae504319596d6388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:21:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-997d7a3446af536534d49453f0aa2f3b65a92fcb22ff089e21d3be4e1ed75cba-merged.mount: Deactivated successfully.
Nov 24 18:21:35 compute-0 podman[97862]: 2025-11-24 18:21:35.948990357 +0000 UTC m=+0.152876537 container start f50666e6f2cc17d306a03b9c8d8988856d7461776adca9a4ae504319596d6388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_einstein, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 18:21:35 compute-0 podman[97862]: 2025-11-24 18:21:35.953238933 +0000 UTC m=+0.157125133 container attach f50666e6f2cc17d306a03b9c8d8988856d7461776adca9a4ae504319596d6388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_einstein, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 24 18:21:35 compute-0 podman[97719]: 2025-11-24 18:21:35.967862786 +0000 UTC m=+0.743981864 container remove 67a9ecf47ff818502d6ab6cc0a7e8bc8c83f9ca53f66f13c71292066d57cd0e5 (image=quay.io/ceph/ceph:v18, name=hungry_bassi, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 24 18:21:35 compute-0 systemd[1]: libpod-conmon-67a9ecf47ff818502d6ab6cc0a7e8bc8c83f9ca53f66f13c71292066d57cd0e5.scope: Deactivated successfully.
Nov 24 18:21:35 compute-0 sudo[97642]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:36 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Nov 24 18:21:36 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 24 18:21:36 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Nov 24 18:21:36 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Nov 24 18:21:36 compute-0 ceph-mgr[75218]: [progress INFO root] update: starting ev ce48b40b-ae73-4c57-b147-b639b9742672 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 24 18:21:36 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Nov 24 18:21:36 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 18:21:36 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 24 18:21:36 compute-0 ceph-mon[74927]: osdmap e36: 3 total, 3 up, 3 in
Nov 24 18:21:36 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 18:21:36 compute-0 ceph-mon[74927]: from='client.14246 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:21:36 compute-0 ceph-mon[74927]: Saving service rgw.rgw spec with placement compute-0
Nov 24 18:21:36 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:36 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v89: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:36 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 24 18:21:36 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 18:21:36 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 24 18:21:36 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 18:21:36 compute-0 python3[97983]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 18:21:36 compute-0 boring_einstein[97879]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:21:36 compute-0 boring_einstein[97879]: --> relative data size: 1.0
Nov 24 18:21:36 compute-0 boring_einstein[97879]: --> All data devices are unavailable
Nov 24 18:21:37 compute-0 systemd[1]: libpod-f50666e6f2cc17d306a03b9c8d8988856d7461776adca9a4ae504319596d6388.scope: Deactivated successfully.
Nov 24 18:21:37 compute-0 systemd[1]: libpod-f50666e6f2cc17d306a03b9c8d8988856d7461776adca9a4ae504319596d6388.scope: Consumed 1.019s CPU time.
Nov 24 18:21:37 compute-0 podman[97862]: 2025-11-24 18:21:37.01835265 +0000 UTC m=+1.222238830 container died f50666e6f2cc17d306a03b9c8d8988856d7461776adca9a4ae504319596d6388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_einstein, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:21:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c23b481e2f588c59e78f9cbdc47449b5ec69523cc16487750f505dc46227f91-merged.mount: Deactivated successfully.
Nov 24 18:21:37 compute-0 podman[97862]: 2025-11-24 18:21:37.068556007 +0000 UTC m=+1.272442177 container remove f50666e6f2cc17d306a03b9c8d8988856d7461776adca9a4ae504319596d6388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_einstein, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:21:37 compute-0 systemd[1]: libpod-conmon-f50666e6f2cc17d306a03b9c8d8988856d7461776adca9a4ae504319596d6388.scope: Deactivated successfully.
Nov 24 18:21:37 compute-0 sudo[97722]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:37 compute-0 sudo[98062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:37 compute-0 sudo[98062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:37 compute-0 sudo[98062]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:37 compute-0 sudo[98104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:21:37 compute-0 sudo[98104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:37 compute-0 sudo[98104]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:37 compute-0 python3[98098]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764008496.6042538-36862-181779954170689/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:21:37 compute-0 sudo[98129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:37 compute-0 sudo[98129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:37 compute-0 sudo[98129]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Nov 24 18:21:37 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 24 18:21:37 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 18:21:37 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 18:21:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Nov 24 18:21:37 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Nov 24 18:21:37 compute-0 ceph-mgr[75218]: [progress INFO root] update: starting ev 21277009-2326-4879-8df2-e125e4065fb1 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 24 18:21:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Nov 24 18:21:37 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 18:21:37 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 38 pg[3.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=38 pruub=10.759669304s) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active pruub 67.664543152s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:37 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 38 pg[3.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=38 pruub=10.759669304s) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown pruub 67.664543152s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:37 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 24 18:21:37 compute-0 ceph-mon[74927]: osdmap e37: 3 total, 3 up, 3 in
Nov 24 18:21:37 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 18:21:37 compute-0 ceph-mon[74927]: pgmap v89: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:37 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 18:21:37 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 18:21:37 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 24 18:21:37 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 18:21:37 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 18:21:37 compute-0 ceph-mon[74927]: osdmap e38: 3 total, 3 up, 3 in
Nov 24 18:21:37 compute-0 sudo[98154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:21:37 compute-0 sudo[98154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:21:37 compute-0 sudo[98264]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjkymqbwowwbaiirjisxvjpefmxcicbd ; /usr/bin/python3'
Nov 24 18:21:37 compute-0 sudo[98264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:37 compute-0 podman[98269]: 2025-11-24 18:21:37.650254251 +0000 UTC m=+0.045368828 container create 889b288ebba50f35fea72f0703d50588abf71a018af62076f58214d0e7deece0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:21:37 compute-0 systemd[1]: Started libpod-conmon-889b288ebba50f35fea72f0703d50588abf71a018af62076f58214d0e7deece0.scope.
Nov 24 18:21:37 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:37 compute-0 podman[98269]: 2025-11-24 18:21:37.715638765 +0000 UTC m=+0.110753342 container init 889b288ebba50f35fea72f0703d50588abf71a018af62076f58214d0e7deece0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 18:21:37 compute-0 podman[98269]: 2025-11-24 18:21:37.723670434 +0000 UTC m=+0.118785001 container start 889b288ebba50f35fea72f0703d50588abf71a018af62076f58214d0e7deece0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 18:21:37 compute-0 podman[98269]: 2025-11-24 18:21:37.726032423 +0000 UTC m=+0.121146990 container attach 889b288ebba50f35fea72f0703d50588abf71a018af62076f58214d0e7deece0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 18:21:37 compute-0 podman[98269]: 2025-11-24 18:21:37.629767012 +0000 UTC m=+0.024881629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:21:37 compute-0 systemd[1]: libpod-889b288ebba50f35fea72f0703d50588abf71a018af62076f58214d0e7deece0.scope: Deactivated successfully.
Nov 24 18:21:37 compute-0 xenodochial_boyd[98286]: 167 167
Nov 24 18:21:37 compute-0 conmon[98286]: conmon 889b288ebba50f35fea7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-889b288ebba50f35fea72f0703d50588abf71a018af62076f58214d0e7deece0.scope/container/memory.events
Nov 24 18:21:37 compute-0 podman[98269]: 2025-11-24 18:21:37.729653943 +0000 UTC m=+0.124768510 container died 889b288ebba50f35fea72f0703d50588abf71a018af62076f58214d0e7deece0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_boyd, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:21:37 compute-0 python3[98268]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:21:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-70abd7b423c3db90603f3514dbfb1dc74b060546d5efebab079f905a3c5e55f8-merged.mount: Deactivated successfully.
Nov 24 18:21:37 compute-0 podman[98269]: 2025-11-24 18:21:37.806389198 +0000 UTC m=+0.201503765 container remove 889b288ebba50f35fea72f0703d50588abf71a018af62076f58214d0e7deece0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_boyd, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 18:21:37 compute-0 systemd[1]: libpod-conmon-889b288ebba50f35fea72f0703d50588abf71a018af62076f58214d0e7deece0.scope: Deactivated successfully.
Nov 24 18:21:37 compute-0 podman[98292]: 2025-11-24 18:21:37.847763985 +0000 UTC m=+0.085809001 container create 368585f5c544ac87a94a3aa46ed80732abb505072e3359e338acea5a8132a647 (image=quay.io/ceph/ceph:v18, name=peaceful_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:21:37 compute-0 systemd[1]: Started libpod-conmon-368585f5c544ac87a94a3aa46ed80732abb505072e3359e338acea5a8132a647.scope.
Nov 24 18:21:37 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a0bf44b6b97d6a0975662db4effe849eaa7897d67fceabd93bfd78785b4d3e6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a0bf44b6b97d6a0975662db4effe849eaa7897d67fceabd93bfd78785b4d3e6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a0bf44b6b97d6a0975662db4effe849eaa7897d67fceabd93bfd78785b4d3e6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:37 compute-0 podman[98292]: 2025-11-24 18:21:37.904035343 +0000 UTC m=+0.142080419 container init 368585f5c544ac87a94a3aa46ed80732abb505072e3359e338acea5a8132a647 (image=quay.io/ceph/ceph:v18, name=peaceful_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 18:21:37 compute-0 podman[98292]: 2025-11-24 18:21:37.911841516 +0000 UTC m=+0.149886512 container start 368585f5c544ac87a94a3aa46ed80732abb505072e3359e338acea5a8132a647 (image=quay.io/ceph/ceph:v18, name=peaceful_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 18:21:37 compute-0 podman[98292]: 2025-11-24 18:21:37.815693519 +0000 UTC m=+0.053738545 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:21:37 compute-0 podman[98292]: 2025-11-24 18:21:37.91520974 +0000 UTC m=+0.153254806 container attach 368585f5c544ac87a94a3aa46ed80732abb505072e3359e338acea5a8132a647 (image=quay.io/ceph/ceph:v18, name=peaceful_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 18:21:37 compute-0 podman[98327]: 2025-11-24 18:21:37.957663954 +0000 UTC m=+0.042026834 container create 0e4463cfd6dea56fac23dc07b92a21f340b211fe3d083fd2dfe9fc258169a641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 18:21:37 compute-0 systemd[1]: Started libpod-conmon-0e4463cfd6dea56fac23dc07b92a21f340b211fe3d083fd2dfe9fc258169a641.scope.
Nov 24 18:21:38 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51f032e7a6cc6f7ae647a48093488b8acba761ae38f45e3e83e1b7aeb9c193c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51f032e7a6cc6f7ae647a48093488b8acba761ae38f45e3e83e1b7aeb9c193c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51f032e7a6cc6f7ae647a48093488b8acba761ae38f45e3e83e1b7aeb9c193c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51f032e7a6cc6f7ae647a48093488b8acba761ae38f45e3e83e1b7aeb9c193c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:38 compute-0 podman[98327]: 2025-11-24 18:21:37.942073107 +0000 UTC m=+0.026436017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:21:38 compute-0 podman[98327]: 2025-11-24 18:21:38.057179025 +0000 UTC m=+0.141541975 container init 0e4463cfd6dea56fac23dc07b92a21f340b211fe3d083fd2dfe9fc258169a641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wozniak, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:21:38 compute-0 podman[98327]: 2025-11-24 18:21:38.065209705 +0000 UTC m=+0.149572605 container start 0e4463cfd6dea56fac23dc07b92a21f340b211fe3d083fd2dfe9fc258169a641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 18:21:38 compute-0 podman[98327]: 2025-11-24 18:21:38.068110097 +0000 UTC m=+0.152473007 container attach 0e4463cfd6dea56fac23dc07b92a21f340b211fe3d083fd2dfe9fc258169a641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:21:38 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Nov 24 18:21:38 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 24 18:21:38 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Nov 24 18:21:38 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Nov 24 18:21:38 compute-0 ceph-mgr[75218]: [progress INFO root] update: starting ev 304c0f2e-06cd-431f-aba9-c7d965d83074 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 24 18:21:38 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0) v1
Nov 24 18:21:38 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.1f( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.1e( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.1c( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.1d( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.a( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.1b( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.9( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.8( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.7( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.6( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.5( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.3( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.1( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.4( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.2( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.b( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.c( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.e( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.d( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.f( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.10( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.11( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.12( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.13( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.14( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.15( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.16( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.17( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.18( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.1a( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.19( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:38 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 18:21:38 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 24 18:21:38 compute-0 ceph-mon[74927]: osdmap e39: 3 total, 3 up, 3 in
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.1f( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.1c( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.1e( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.1d( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.a( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.9( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.1b( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.8( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.7( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.6( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.5( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.3( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.0( empty local-lis/les=38/39 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.4( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.2( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.1( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.f( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.c( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.e( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.b( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.10( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.11( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.d( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.12( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.15( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.13( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.14( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.17( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.18( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.16( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.19( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.1a( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:38 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:21:38 compute-0 ceph-mgr[75218]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 24 18:21:38 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Nov 24 18:21:38 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 24 18:21:38 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Nov 24 18:21:38 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 24 18:21:38 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Nov 24 18:21:38 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 24 18:21:38 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Nov 24 18:21:38 compute-0 ceph-mon[74927]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 24 18:21:38 compute-0 ceph-mon[74927]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 24 18:21:38 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0[74923]: 2025-11-24T18:21:38.430+0000 7f94aeb23640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 24 18:21:38 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 24 18:21:38 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).mds e2 new map
Nov 24 18:21:38 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-24T18:21:38.431267+0000
                                           modified        2025-11-24T18:21:38.431323+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
Nov 24 18:21:38 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Nov 24 18:21:38 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Nov 24 18:21:38 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Nov 24 18:21:38 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Nov 24 18:21:38 compute-0 ceph-mgr[75218]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 24 18:21:38 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 24 18:21:38 compute-0 ceph-mgr[75218]: [progress INFO root] update: starting ev 288afbf3-be1e-45d6-9c0a-4665b2bdc2d8 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Nov 24 18:21:38 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 24 18:21:38 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Nov 24 18:21:38 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 18:21:38 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:38 compute-0 ceph-mgr[75218]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 24 18:21:38 compute-0 systemd[1]: libpod-368585f5c544ac87a94a3aa46ed80732abb505072e3359e338acea5a8132a647.scope: Deactivated successfully.
Nov 24 18:21:38 compute-0 podman[98292]: 2025-11-24 18:21:38.472709933 +0000 UTC m=+0.710754949 container died 368585f5c544ac87a94a3aa46ed80732abb505072e3359e338acea5a8132a647 (image=quay.io/ceph/ceph:v18, name=peaceful_heisenberg, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:21:38 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v93: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:38 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 24 18:21:38 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 18:21:38 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 24 18:21:38 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 18:21:38 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 24 18:21:38 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 18:21:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a0bf44b6b97d6a0975662db4effe849eaa7897d67fceabd93bfd78785b4d3e6-merged.mount: Deactivated successfully.
Nov 24 18:21:38 compute-0 podman[98292]: 2025-11-24 18:21:38.516296355 +0000 UTC m=+0.754341361 container remove 368585f5c544ac87a94a3aa46ed80732abb505072e3359e338acea5a8132a647 (image=quay.io/ceph/ceph:v18, name=peaceful_heisenberg, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 18:21:38 compute-0 systemd[1]: libpod-conmon-368585f5c544ac87a94a3aa46ed80732abb505072e3359e338acea5a8132a647.scope: Deactivated successfully.
Nov 24 18:21:38 compute-0 sudo[98264]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:38 compute-0 sudo[98406]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptrnupcodmjllbvjhpzrrqhluuladzcz ; /usr/bin/python3'
Nov 24 18:21:38 compute-0 sudo[98406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]: {
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:     "0": [
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:         {
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "devices": [
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "/dev/loop3"
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             ],
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "lv_name": "ceph_lv0",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "lv_size": "21470642176",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "name": "ceph_lv0",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "tags": {
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.cluster_name": "ceph",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.crush_device_class": "",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.encrypted": "0",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.osd_id": "0",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.type": "block",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.vdo": "0"
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             },
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "type": "block",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "vg_name": "ceph_vg0"
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:         }
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:     ],
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:     "1": [
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:         {
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "devices": [
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "/dev/loop4"
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             ],
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "lv_name": "ceph_lv1",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "lv_size": "21470642176",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "name": "ceph_lv1",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "tags": {
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.cluster_name": "ceph",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.crush_device_class": "",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.encrypted": "0",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.osd_id": "1",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.type": "block",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.vdo": "0"
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             },
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "type": "block",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "vg_name": "ceph_vg1"
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:         }
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:     ],
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:     "2": [
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:         {
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "devices": [
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "/dev/loop5"
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             ],
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "lv_name": "ceph_lv2",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "lv_size": "21470642176",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "name": "ceph_lv2",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "tags": {
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.cluster_name": "ceph",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.crush_device_class": "",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.encrypted": "0",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.osd_id": "2",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.type": "block",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:                 "ceph.vdo": "0"
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             },
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "type": "block",
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:             "vg_name": "ceph_vg2"
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:         }
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]:     ]
Nov 24 18:21:38 compute-0 gifted_wozniak[98346]: }
Nov 24 18:21:38 compute-0 python3[98408]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:21:38 compute-0 systemd[1]: libpod-0e4463cfd6dea56fac23dc07b92a21f340b211fe3d083fd2dfe9fc258169a641.scope: Deactivated successfully.
Nov 24 18:21:38 compute-0 podman[98327]: 2025-11-24 18:21:38.838990988 +0000 UTC m=+0.923353888 container died 0e4463cfd6dea56fac23dc07b92a21f340b211fe3d083fd2dfe9fc258169a641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wozniak, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 18:21:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-51f032e7a6cc6f7ae647a48093488b8acba761ae38f45e3e83e1b7aeb9c193c8-merged.mount: Deactivated successfully.
Nov 24 18:21:38 compute-0 podman[98413]: 2025-11-24 18:21:38.889697036 +0000 UTC m=+0.051236312 container create 0ae5bfdf2163437a0526b58ae48385831bc36385da002291643a7329577f0e7e (image=quay.io/ceph/ceph:v18, name=awesome_shtern, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 18:21:38 compute-0 podman[98327]: 2025-11-24 18:21:38.89668022 +0000 UTC m=+0.981043110 container remove 0e4463cfd6dea56fac23dc07b92a21f340b211fe3d083fd2dfe9fc258169a641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wozniak, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:21:38 compute-0 systemd[1]: libpod-conmon-0e4463cfd6dea56fac23dc07b92a21f340b211fe3d083fd2dfe9fc258169a641.scope: Deactivated successfully.
Nov 24 18:21:38 compute-0 sudo[98154]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:38 compute-0 systemd[1]: Started libpod-conmon-0ae5bfdf2163437a0526b58ae48385831bc36385da002291643a7329577f0e7e.scope.
Nov 24 18:21:38 compute-0 podman[98413]: 2025-11-24 18:21:38.869587137 +0000 UTC m=+0.031126433 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:21:38 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5033a9e6e9d69bf6334452aaedf643d3530a720ceab0c32773d2f5403e3dad4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5033a9e6e9d69bf6334452aaedf643d3530a720ceab0c32773d2f5403e3dad4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5033a9e6e9d69bf6334452aaedf643d3530a720ceab0c32773d2f5403e3dad4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:38 compute-0 podman[98413]: 2025-11-24 18:21:38.97883234 +0000 UTC m=+0.140371676 container init 0ae5bfdf2163437a0526b58ae48385831bc36385da002291643a7329577f0e7e (image=quay.io/ceph/ceph:v18, name=awesome_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 18:21:38 compute-0 podman[98413]: 2025-11-24 18:21:38.985497255 +0000 UTC m=+0.147036531 container start 0ae5bfdf2163437a0526b58ae48385831bc36385da002291643a7329577f0e7e (image=quay.io/ceph/ceph:v18, name=awesome_shtern, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 18:21:38 compute-0 podman[98413]: 2025-11-24 18:21:38.988585052 +0000 UTC m=+0.150124318 container attach 0ae5bfdf2163437a0526b58ae48385831bc36385da002291643a7329577f0e7e (image=quay.io/ceph/ceph:v18, name=awesome_shtern, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 18:21:38 compute-0 sudo[98439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:38 compute-0 sudo[98439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:38 compute-0 sudo[98439]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:39 compute-0 sudo[98468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:21:39 compute-0 sudo[98468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:39 compute-0 sudo[98468]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:39 compute-0 sudo[98493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:39 compute-0 sudo[98493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:39 compute-0 sudo[98493]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:39 compute-0 sudo[98518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:21:39 compute-0 sudo[98518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:39 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 18:21:39 compute-0 ceph-mon[74927]: from='client.14248 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:21:39 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 24 18:21:39 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 24 18:21:39 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 24 18:21:39 compute-0 ceph-mon[74927]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 24 18:21:39 compute-0 ceph-mon[74927]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 24 18:21:39 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 24 18:21:39 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Nov 24 18:21:39 compute-0 ceph-mon[74927]: osdmap e40: 3 total, 3 up, 3 in
Nov 24 18:21:39 compute-0 ceph-mon[74927]: fsmap cephfs:0
Nov 24 18:21:39 compute-0 ceph-mon[74927]: Saving service mds.cephfs spec with placement compute-0
Nov 24 18:21:39 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 18:21:39 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:39 compute-0 ceph-mon[74927]: pgmap v93: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:39 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 18:21:39 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 18:21:39 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 18:21:39 compute-0 podman[98600]: 2025-11-24 18:21:39.416066576 +0000 UTC m=+0.034919848 container create 1505b29270d3d32179b5fcf28716171c194563609926ca4c8d2d1bdec533c67b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:21:39 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Nov 24 18:21:39 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 24 18:21:39 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 18:21:39 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 18:21:39 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 18:21:39 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Nov 24 18:21:39 compute-0 systemd[1]: Started libpod-conmon-1505b29270d3d32179b5fcf28716171c194563609926ca4c8d2d1bdec533c67b.scope.
Nov 24 18:21:39 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Nov 24 18:21:39 compute-0 ceph-mgr[75218]: [progress INFO root] update: starting ev 033bfeea-2045-48ad-993c-b77cee9df009 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 24 18:21:39 compute-0 ceph-mgr[75218]: [progress INFO root] complete: finished ev ed1dd8bb-0ade-4f15-b635-ab212938f2fe (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 24 18:21:39 compute-0 ceph-mgr[75218]: [progress INFO root] Completed event ed1dd8bb-0ade-4f15-b635-ab212938f2fe (PG autoscaler increasing pool 2 PGs from 1 to 32) in 4 seconds
Nov 24 18:21:39 compute-0 ceph-mgr[75218]: [progress INFO root] complete: finished ev ce48b40b-ae73-4c57-b147-b639b9742672 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 24 18:21:39 compute-0 ceph-mgr[75218]: [progress INFO root] Completed event ce48b40b-ae73-4c57-b147-b639b9742672 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 3 seconds
Nov 24 18:21:39 compute-0 ceph-mgr[75218]: [progress INFO root] complete: finished ev 21277009-2326-4879-8df2-e125e4065fb1 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 24 18:21:39 compute-0 ceph-mgr[75218]: [progress INFO root] Completed event 21277009-2326-4879-8df2-e125e4065fb1 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 2 seconds
Nov 24 18:21:39 compute-0 ceph-mgr[75218]: [progress INFO root] complete: finished ev 304c0f2e-06cd-431f-aba9-c7d965d83074 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 24 18:21:39 compute-0 ceph-mgr[75218]: [progress INFO root] Completed event 304c0f2e-06cd-431f-aba9-c7d965d83074 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 1 seconds
Nov 24 18:21:39 compute-0 ceph-mgr[75218]: [progress INFO root] complete: finished ev 288afbf3-be1e-45d6-9c0a-4665b2bdc2d8 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Nov 24 18:21:39 compute-0 ceph-mgr[75218]: [progress INFO root] Completed event 288afbf3-be1e-45d6-9c0a-4665b2bdc2d8 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Nov 24 18:21:39 compute-0 ceph-mgr[75218]: [progress INFO root] complete: finished ev 033bfeea-2045-48ad-993c-b77cee9df009 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 24 18:21:39 compute-0 ceph-mgr[75218]: [progress INFO root] Completed event 033bfeea-2045-48ad-993c-b77cee9df009 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 38 pg[2.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=38 pruub=14.567813873s) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active pruub 66.103454590s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=38 pruub=14.567813873s) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown pruub 66.103454590s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.1( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.2( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.3( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.6( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.7( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.8( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.9( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.1a( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.1b( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.4( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.5( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.1e( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.1f( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.1c( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.1d( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.18( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.19( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.c( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.d( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.a( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.b( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.12( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.13( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.16( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.17( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.e( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.f( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.10( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.14( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.11( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.15( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=41 pruub=11.635890961s) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active pruub 63.178226471s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:39 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=41 pruub=11.635890961s) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown pruub 63.178226471s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:39 compute-0 podman[98600]: 2025-11-24 18:21:39.487162862 +0000 UTC m=+0.106016134 container init 1505b29270d3d32179b5fcf28716171c194563609926ca4c8d2d1bdec533c67b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 18:21:39 compute-0 podman[98600]: 2025-11-24 18:21:39.49231561 +0000 UTC m=+0.111168862 container start 1505b29270d3d32179b5fcf28716171c194563609926ca4c8d2d1bdec533c67b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 18:21:39 compute-0 podman[98600]: 2025-11-24 18:21:39.494838882 +0000 UTC m=+0.113692154 container attach 1505b29270d3d32179b5fcf28716171c194563609926ca4c8d2d1bdec533c67b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hoover, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Nov 24 18:21:39 compute-0 amazing_hoover[98617]: 167 167
Nov 24 18:21:39 compute-0 podman[98600]: 2025-11-24 18:21:39.401012102 +0000 UTC m=+0.019865384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:21:39 compute-0 systemd[1]: libpod-1505b29270d3d32179b5fcf28716171c194563609926ca4c8d2d1bdec533c67b.scope: Deactivated successfully.
Nov 24 18:21:39 compute-0 podman[98600]: 2025-11-24 18:21:39.496783201 +0000 UTC m=+0.115636453 container died 1505b29270d3d32179b5fcf28716171c194563609926ca4c8d2d1bdec533c67b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hoover, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 18:21:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-b54a4f060a0f9cabcfd0c82dc97826166b39e979247fe3465851d1bd89533f60-merged.mount: Deactivated successfully.
Nov 24 18:21:39 compute-0 podman[98600]: 2025-11-24 18:21:39.524570771 +0000 UTC m=+0.143424023 container remove 1505b29270d3d32179b5fcf28716171c194563609926ca4c8d2d1bdec533c67b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hoover, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:21:39 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:21:39 compute-0 ceph-mgr[75218]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 24 18:21:39 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 24 18:21:39 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 24 18:21:39 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:39 compute-0 awesome_shtern[98443]: Scheduled mds.cephfs update...
Nov 24 18:21:39 compute-0 systemd[1]: libpod-conmon-1505b29270d3d32179b5fcf28716171c194563609926ca4c8d2d1bdec533c67b.scope: Deactivated successfully.
Nov 24 18:21:39 compute-0 systemd[1]: libpod-0ae5bfdf2163437a0526b58ae48385831bc36385da002291643a7329577f0e7e.scope: Deactivated successfully.
Nov 24 18:21:39 compute-0 podman[98413]: 2025-11-24 18:21:39.551063888 +0000 UTC m=+0.712603164 container died 0ae5bfdf2163437a0526b58ae48385831bc36385da002291643a7329577f0e7e (image=quay.io/ceph/ceph:v18, name=awesome_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 18:21:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5033a9e6e9d69bf6334452aaedf643d3530a720ceab0c32773d2f5403e3dad4-merged.mount: Deactivated successfully.
Nov 24 18:21:39 compute-0 podman[98413]: 2025-11-24 18:21:39.589880412 +0000 UTC m=+0.751419688 container remove 0ae5bfdf2163437a0526b58ae48385831bc36385da002291643a7329577f0e7e (image=quay.io/ceph/ceph:v18, name=awesome_shtern, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 18:21:39 compute-0 systemd[1]: libpod-conmon-0ae5bfdf2163437a0526b58ae48385831bc36385da002291643a7329577f0e7e.scope: Deactivated successfully.
Nov 24 18:21:39 compute-0 sudo[98406]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:39 compute-0 podman[98652]: 2025-11-24 18:21:39.66226444 +0000 UTC m=+0.035903553 container create 74917c36395d7e0e24d5281427c606412e895134cc003063629189df8c87a529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:21:39 compute-0 systemd[1]: Started libpod-conmon-74917c36395d7e0e24d5281427c606412e895134cc003063629189df8c87a529.scope.
Nov 24 18:21:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:39 compute-0 ceph-mgr[75218]: [progress INFO root] Writing back 9 completed events
Nov 24 18:21:39 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 24 18:21:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09acff362b97eea7c684db1d5cbaf565f6b57f559b918bd40175000e1e11a036/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09acff362b97eea7c684db1d5cbaf565f6b57f559b918bd40175000e1e11a036/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09acff362b97eea7c684db1d5cbaf565f6b57f559b918bd40175000e1e11a036/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09acff362b97eea7c684db1d5cbaf565f6b57f559b918bd40175000e1e11a036/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:39 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:39 compute-0 podman[98652]: 2025-11-24 18:21:39.739716033 +0000 UTC m=+0.113355146 container init 74917c36395d7e0e24d5281427c606412e895134cc003063629189df8c87a529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 18:21:39 compute-0 podman[98652]: 2025-11-24 18:21:39.645576365 +0000 UTC m=+0.019215498 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:21:39 compute-0 podman[98652]: 2025-11-24 18:21:39.745751133 +0000 UTC m=+0.119390246 container start 74917c36395d7e0e24d5281427c606412e895134cc003063629189df8c87a529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:21:39 compute-0 podman[98652]: 2025-11-24 18:21:39.748735567 +0000 UTC m=+0.122374680 container attach 74917c36395d7e0e24d5281427c606412e895134cc003063629189df8c87a529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:21:39 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 41 pg[4.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=9.316533089s) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active pruub 75.137977600s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:39 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 41 pg[6.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=41 pruub=12.344822884s) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active pruub 78.166275024s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:39 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 41 pg[4.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=9.316533089s) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown pruub 75.137977600s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:39 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 41 pg[6.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=41 pruub=12.344822884s) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown pruub 78.166275024s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 sudo[98750]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxvhtjirerdcqcowalbolooolwnfkffk ; /usr/bin/python3'
Nov 24 18:21:40 compute-0 sudo[98750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:40 compute-0 python3[98752]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 18:21:40 compute-0 sudo[98750]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:40 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 24 18:21:40 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 18:21:40 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 18:21:40 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 18:21:40 compute-0 ceph-mon[74927]: osdmap e41: 3 total, 3 up, 3 in
Nov 24 18:21:40 compute-0 ceph-mon[74927]: from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:21:40 compute-0 ceph-mon[74927]: Saving service mds.cephfs spec with placement compute-0
Nov 24 18:21:40 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:40 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:40 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Nov 24 18:21:40 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Nov 24 18:21:40 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.1c( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.1f( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.10( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.17( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.8( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.a( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.b( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.15( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.17( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.1a( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.16( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.14( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.15( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.17( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.6( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.14( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.16( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.11( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.e( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.d( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.1b( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.12( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.10( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.13( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.10( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.11( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.12( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.f( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.d( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.c( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.f( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.c( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.e( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.2( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.2( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.3( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.3( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.1( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.1b( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.6( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.19( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.4( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.b( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.18( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.7( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.8( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.19( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.6( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.4( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.b( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.9( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.1a( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.7( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.5( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.a( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.1e( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.1c( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.1c( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.1d( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.1e( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.1d( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.1f( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.1f( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.16( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.15( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.14( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.1a( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.15( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.14( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.16( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.11( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.17( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.17( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.12( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v96: 162 pgs: 124 unknown, 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:40 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 24 18:21:40 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.1c( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.1f( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.10( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.14( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.12( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.10( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.17( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.8( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.a( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.c( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.b( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.0( empty local-lis/les=38/42 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.0( empty local-lis/les=41/42 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.1( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.e( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.6( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.e( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.d( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.1b( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.1e( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.10( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.13( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.12( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.11( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.d( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.f( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.f( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.c( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.c( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.0( empty local-lis/les=41/42 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.2( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.e( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.0( empty local-lis/les=41/42 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.3( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.2( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.1( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.1b( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.3( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.6( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.10( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.18( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.19( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.4( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.7( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.4( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.19( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.b( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.8( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.b( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.9( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.7( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.5( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.a( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.6( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.1e( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.1c( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.1c( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.1e( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.1f( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.1d( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.1f( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.1d( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:40 compute-0 sudo[98831]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjvgudvrrxcfoqinbjongzkfwncdjswv ; /usr/bin/python3'
Nov 24 18:21:40 compute-0 sudo[98831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:40 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Nov 24 18:21:40 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Nov 24 18:21:40 compute-0 python3[98835]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764008499.9765463-36892-81734009311733/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=da81228d7cc67f3a06b39ee156e276fa0a4ebf0e backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:21:40 compute-0 sudo[98831]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:40 compute-0 condescending_carson[98670]: {
Nov 24 18:21:40 compute-0 condescending_carson[98670]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:21:40 compute-0 condescending_carson[98670]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:21:40 compute-0 condescending_carson[98670]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:21:40 compute-0 condescending_carson[98670]:         "osd_id": 0,
Nov 24 18:21:40 compute-0 condescending_carson[98670]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:21:40 compute-0 condescending_carson[98670]:         "type": "bluestore"
Nov 24 18:21:40 compute-0 condescending_carson[98670]:     },
Nov 24 18:21:40 compute-0 condescending_carson[98670]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:21:40 compute-0 condescending_carson[98670]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:21:40 compute-0 condescending_carson[98670]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:21:40 compute-0 condescending_carson[98670]:         "osd_id": 1,
Nov 24 18:21:40 compute-0 condescending_carson[98670]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:21:40 compute-0 condescending_carson[98670]:         "type": "bluestore"
Nov 24 18:21:40 compute-0 condescending_carson[98670]:     },
Nov 24 18:21:40 compute-0 condescending_carson[98670]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:21:40 compute-0 condescending_carson[98670]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:21:40 compute-0 condescending_carson[98670]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:21:40 compute-0 condescending_carson[98670]:         "osd_id": 2,
Nov 24 18:21:40 compute-0 condescending_carson[98670]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:21:40 compute-0 condescending_carson[98670]:         "type": "bluestore"
Nov 24 18:21:40 compute-0 condescending_carson[98670]:     }
Nov 24 18:21:40 compute-0 condescending_carson[98670]: }
Nov 24 18:21:40 compute-0 systemd[1]: libpod-74917c36395d7e0e24d5281427c606412e895134cc003063629189df8c87a529.scope: Deactivated successfully.
Nov 24 18:21:40 compute-0 systemd[1]: libpod-74917c36395d7e0e24d5281427c606412e895134cc003063629189df8c87a529.scope: Consumed 1.002s CPU time.
Nov 24 18:21:40 compute-0 podman[98652]: 2025-11-24 18:21:40.748498212 +0000 UTC m=+1.122137325 container died 74917c36395d7e0e24d5281427c606412e895134cc003063629189df8c87a529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:21:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-09acff362b97eea7c684db1d5cbaf565f6b57f559b918bd40175000e1e11a036-merged.mount: Deactivated successfully.
Nov 24 18:21:40 compute-0 podman[98652]: 2025-11-24 18:21:40.798333259 +0000 UTC m=+1.171972372 container remove 74917c36395d7e0e24d5281427c606412e895134cc003063629189df8c87a529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 18:21:40 compute-0 systemd[1]: libpod-conmon-74917c36395d7e0e24d5281427c606412e895134cc003063629189df8c87a529.scope: Deactivated successfully.
Nov 24 18:21:40 compute-0 sudo[98518]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:40 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:21:40 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:40 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:21:40 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:40 compute-0 sudo[98892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:40 compute-0 sudo[98892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:40 compute-0 sudo[98892]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:40 compute-0 sudo[98917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:21:40 compute-0 sudo[98917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:40 compute-0 sudo[98917]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:40 compute-0 sudo[98977]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsrezhzqhqorlfzegvicifcfmkyjrrtj ; /usr/bin/python3'
Nov 24 18:21:40 compute-0 sudo[98977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:40 compute-0 sudo[98954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:40 compute-0 sudo[98954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:40 compute-0 sudo[98954]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:41 compute-0 sudo[98993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:21:41 compute-0 sudo[98993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:41 compute-0 sudo[98993]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:41 compute-0 sudo[99018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:41 compute-0 sudo[99018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:41 compute-0 python3[98990]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:21:41 compute-0 sudo[99018]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:41 compute-0 sudo[99044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 24 18:21:41 compute-0 sudo[99044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:41 compute-0 podman[99043]: 2025-11-24 18:21:41.19303861 +0000 UTC m=+0.075512776 container create 33a56513b5692696c799ac308e5e8bc1fbd70269d4f346b766948b42156beefd (image=quay.io/ceph/ceph:v18, name=objective_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 18:21:41 compute-0 podman[99043]: 2025-11-24 18:21:41.137570033 +0000 UTC m=+0.020044209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:21:41 compute-0 systemd[1]: Started libpod-conmon-33a56513b5692696c799ac308e5e8bc1fbd70269d4f346b766948b42156beefd.scope.
Nov 24 18:21:41 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d99034a3dcec2f20796f9ccbf38ab74f84ba5091770537005e2439ba22a349b0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d99034a3dcec2f20796f9ccbf38ab74f84ba5091770537005e2439ba22a349b0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:41 compute-0 podman[99043]: 2025-11-24 18:21:41.278537323 +0000 UTC m=+0.161011529 container init 33a56513b5692696c799ac308e5e8bc1fbd70269d4f346b766948b42156beefd (image=quay.io/ceph/ceph:v18, name=objective_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:21:41 compute-0 podman[99043]: 2025-11-24 18:21:41.284185063 +0000 UTC m=+0.166659239 container start 33a56513b5692696c799ac308e5e8bc1fbd70269d4f346b766948b42156beefd (image=quay.io/ceph/ceph:v18, name=objective_lamarr, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:21:41 compute-0 podman[99043]: 2025-11-24 18:21:41.287956727 +0000 UTC m=+0.170430893 container attach 33a56513b5692696c799ac308e5e8bc1fbd70269d4f346b766948b42156beefd (image=quay.io/ceph/ceph:v18, name=objective_lamarr, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:21:41 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Nov 24 18:21:41 compute-0 ceph-mon[74927]: osdmap e42: 3 total, 3 up, 3 in
Nov 24 18:21:41 compute-0 ceph-mon[74927]: pgmap v96: 162 pgs: 124 unknown, 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:41 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 18:21:41 compute-0 ceph-mon[74927]: 3.1 scrub starts
Nov 24 18:21:41 compute-0 ceph-mon[74927]: 3.1 scrub ok
Nov 24 18:21:41 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:41 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:41 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 18:21:41 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Nov 24 18:21:41 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Nov 24 18:21:41 compute-0 podman[99156]: 2025-11-24 18:21:41.583216148 +0000 UTC m=+0.062017701 container exec 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 18:21:41 compute-0 podman[99156]: 2025-11-24 18:21:41.683494508 +0000 UTC m=+0.162296011 container exec_died 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:21:41 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Nov 24 18:21:41 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/173009766' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 24 18:21:41 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/173009766' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 24 18:21:41 compute-0 systemd[1]: libpod-33a56513b5692696c799ac308e5e8bc1fbd70269d4f346b766948b42156beefd.scope: Deactivated successfully.
Nov 24 18:21:41 compute-0 podman[99043]: 2025-11-24 18:21:41.891270258 +0000 UTC m=+0.773744444 container died 33a56513b5692696c799ac308e5e8bc1fbd70269d4f346b766948b42156beefd (image=quay.io/ceph/ceph:v18, name=objective_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:21:41 compute-0 systemd[76548]: Starting Mark boot as successful...
Nov 24 18:21:41 compute-0 systemd[76548]: Finished Mark boot as successful.
Nov 24 18:21:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-d99034a3dcec2f20796f9ccbf38ab74f84ba5091770537005e2439ba22a349b0-merged.mount: Deactivated successfully.
Nov 24 18:21:41 compute-0 podman[99043]: 2025-11-24 18:21:41.950444347 +0000 UTC m=+0.832918503 container remove 33a56513b5692696c799ac308e5e8bc1fbd70269d4f346b766948b42156beefd (image=quay.io/ceph/ceph:v18, name=objective_lamarr, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:21:41 compute-0 systemd[1]: libpod-conmon-33a56513b5692696c799ac308e5e8bc1fbd70269d4f346b766948b42156beefd.scope: Deactivated successfully.
Nov 24 18:21:41 compute-0 sudo[98977]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:42 compute-0 sudo[99044]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:21:42 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:21:42 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:21:42 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:21:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:21:42 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:21:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:21:42 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:42 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 244c4978-a219-461b-8478-c00dfaec020c does not exist
Nov 24 18:21:42 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev d7862420-66f0-4daa-a174-5dddf2dad6c5 does not exist
Nov 24 18:21:42 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 2c1d3e0c-d978-40ca-8c5c-3a70074592e1 does not exist
Nov 24 18:21:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:21:42 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:21:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:21:42 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:21:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:21:42 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:21:42 compute-0 sudo[99311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:42 compute-0 sudo[99311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:42 compute-0 sudo[99311]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:42 compute-0 sudo[99380]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clrrssfaikxwmjdtbconwlnwjweutouk ; /usr/bin/python3'
Nov 24 18:21:42 compute-0 sudo[99380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:42 compute-0 sudo[99339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:21:42 compute-0 sudo[99339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:21:42 compute-0 sudo[99339]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:42 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v98: 193 pgs: 155 unknown, 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:42 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 18:21:42 compute-0 ceph-mon[74927]: osdmap e43: 3 total, 3 up, 3 in
Nov 24 18:21:42 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/173009766' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 24 18:21:42 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/173009766' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 24 18:21:42 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:42 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:42 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:21:42 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:21:42 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:42 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:21:42 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:21:42 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:21:42 compute-0 sudo[99387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:42 compute-0 sudo[99387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:42 compute-0 sudo[99387]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:42 compute-0 python3[99384]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:21:42 compute-0 sudo[99412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:21:42 compute-0 sudo[99412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 43 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=43 pruub=10.489816666s) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active pruub 72.722373962s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 43 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=43 pruub=10.489816666s) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown pruub 72.722373962s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:42 compute-0 podman[99436]: 2025-11-24 18:21:42.65127706 +0000 UTC m=+0.049639474 container create e6ebb17cae0841fd8b98f98277fa15d11f2b6490e816ac159dbf8b0814fc189c (image=quay.io/ceph/ceph:v18, name=angry_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 24 18:21:42 compute-0 systemd[1]: Started libpod-conmon-e6ebb17cae0841fd8b98f98277fa15d11f2b6490e816ac159dbf8b0814fc189c.scope.
Nov 24 18:21:42 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Nov 24 18:21:42 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Nov 24 18:21:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a5a62ecd660fb53520bd08e8734e25145d67f346126c342e3491804cba44e6b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a5a62ecd660fb53520bd08e8734e25145d67f346126c342e3491804cba44e6b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:42 compute-0 podman[99436]: 2025-11-24 18:21:42.714674164 +0000 UTC m=+0.113036608 container init e6ebb17cae0841fd8b98f98277fa15d11f2b6490e816ac159dbf8b0814fc189c (image=quay.io/ceph/ceph:v18, name=angry_dewdney, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 18:21:42 compute-0 podman[99436]: 2025-11-24 18:21:42.720786086 +0000 UTC m=+0.119148490 container start e6ebb17cae0841fd8b98f98277fa15d11f2b6490e816ac159dbf8b0814fc189c (image=quay.io/ceph/ceph:v18, name=angry_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:21:42 compute-0 podman[99436]: 2025-11-24 18:21:42.72340134 +0000 UTC m=+0.121763754 container attach e6ebb17cae0841fd8b98f98277fa15d11f2b6490e816ac159dbf8b0814fc189c (image=quay.io/ceph/ceph:v18, name=angry_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 18:21:42 compute-0 podman[99436]: 2025-11-24 18:21:42.633810456 +0000 UTC m=+0.032172890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:21:42 compute-0 podman[99497]: 2025-11-24 18:21:42.881615919 +0000 UTC m=+0.025082564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:21:43 compute-0 podman[99497]: 2025-11-24 18:21:43.036762862 +0000 UTC m=+0.180229497 container create 9b0c7ded593ad1f47bbc2daa9ffbfbb435342a9c497bf51ec319f22b86fc087c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_montalcini, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:21:43 compute-0 systemd[1]: Started libpod-conmon-9b0c7ded593ad1f47bbc2daa9ffbfbb435342a9c497bf51ec319f22b86fc087c.scope.
Nov 24 18:21:43 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:43 compute-0 podman[99497]: 2025-11-24 18:21:43.222415771 +0000 UTC m=+0.365882416 container init 9b0c7ded593ad1f47bbc2daa9ffbfbb435342a9c497bf51ec319f22b86fc087c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 18:21:43 compute-0 podman[99497]: 2025-11-24 18:21:43.227471747 +0000 UTC m=+0.370938372 container start 9b0c7ded593ad1f47bbc2daa9ffbfbb435342a9c497bf51ec319f22b86fc087c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:21:43 compute-0 great_montalcini[99533]: 167 167
Nov 24 18:21:43 compute-0 systemd[1]: libpod-9b0c7ded593ad1f47bbc2daa9ffbfbb435342a9c497bf51ec319f22b86fc087c.scope: Deactivated successfully.
Nov 24 18:21:43 compute-0 conmon[99533]: conmon 9b0c7ded593ad1f47bbc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9b0c7ded593ad1f47bbc2daa9ffbfbb435342a9c497bf51ec319f22b86fc087c.scope/container/memory.events
Nov 24 18:21:43 compute-0 podman[99497]: 2025-11-24 18:21:43.292642435 +0000 UTC m=+0.436109080 container attach 9b0c7ded593ad1f47bbc2daa9ffbfbb435342a9c497bf51ec319f22b86fc087c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_montalcini, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 18:21:43 compute-0 podman[99497]: 2025-11-24 18:21:43.293022375 +0000 UTC m=+0.436489000 container died 9b0c7ded593ad1f47bbc2daa9ffbfbb435342a9c497bf51ec319f22b86fc087c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 18:21:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 24 18:21:43 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/292142713' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 18:21:43 compute-0 angry_dewdney[99454]: 
Nov 24 18:21:43 compute-0 angry_dewdney[99454]: {"fsid":"e5ee928f-099b-569b-93c9-ecf025cbb50d","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":175,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":43,"num_osds":3,"num_up_osds":3,"osd_up_since":1764008452,"num_in_osds":3,"osd_in_since":1764008421,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"unknown","count":124},{"state_name":"active+clean","count":38}],"num_pgs":162,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84004864,"bytes_avail":64327921664,"bytes_total":64411926528,"unknown_pgs_ratio":0.76543211936950684},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-24T18:20:36.466398+0000","services":{}},"progress_events":{}}
Nov 24 18:21:43 compute-0 systemd[1]: libpod-e6ebb17cae0841fd8b98f98277fa15d11f2b6490e816ac159dbf8b0814fc189c.scope: Deactivated successfully.
Nov 24 18:21:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Nov 24 18:21:43 compute-0 ceph-mon[74927]: pgmap v98: 193 pgs: 155 unknown, 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:43 compute-0 ceph-mon[74927]: 4.1 scrub starts
Nov 24 18:21:43 compute-0 ceph-mon[74927]: 4.1 scrub ok
Nov 24 18:21:43 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/292142713' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 18:21:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a01cf1f0b868e1f16b9de09e33e0bda492cae98f3bc79edb82621e97fb3d466c-merged.mount: Deactivated successfully.
Nov 24 18:21:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Nov 24 18:21:43 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Nov 24 18:21:43 compute-0 podman[99436]: 2025-11-24 18:21:43.5030505 +0000 UTC m=+0.901412934 container died e6ebb17cae0841fd8b98f98277fa15d11f2b6490e816ac159dbf8b0814fc189c (image=quay.io/ceph/ceph:v18, name=angry_dewdney, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.1e( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.1d( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.12( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.10( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.17( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.1e( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.1d( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.10( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.17( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.12( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.0( empty local-lis/les=43/44 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a5a62ecd660fb53520bd08e8734e25145d67f346126c342e3491804cba44e6b-merged.mount: Deactivated successfully.
Nov 24 18:21:43 compute-0 podman[99497]: 2025-11-24 18:21:43.541134496 +0000 UTC m=+0.684601121 container remove 9b0c7ded593ad1f47bbc2daa9ffbfbb435342a9c497bf51ec319f22b86fc087c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:21:43 compute-0 podman[99551]: 2025-11-24 18:21:43.546446868 +0000 UTC m=+0.210151810 container remove e6ebb17cae0841fd8b98f98277fa15d11f2b6490e816ac159dbf8b0814fc189c (image=quay.io/ceph/ceph:v18, name=angry_dewdney, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:21:43 compute-0 systemd[1]: libpod-conmon-e6ebb17cae0841fd8b98f98277fa15d11f2b6490e816ac159dbf8b0814fc189c.scope: Deactivated successfully.
Nov 24 18:21:43 compute-0 sudo[99380]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:43 compute-0 systemd[1]: libpod-conmon-9b0c7ded593ad1f47bbc2daa9ffbfbb435342a9c497bf51ec319f22b86fc087c.scope: Deactivated successfully.
Nov 24 18:21:43 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 24 18:21:43 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 24 18:21:43 compute-0 sudo[99610]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxfczbptpcdmpmnenbpdvsofvtrdhcyn ; /usr/bin/python3'
Nov 24 18:21:43 compute-0 sudo[99610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:43 compute-0 podman[99580]: 2025-11-24 18:21:43.716336836 +0000 UTC m=+0.047885710 container create dfe9aae0b2691904230297558da3d083f89159c7501ef470ff1f0237191896a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:21:43 compute-0 systemd[1]: Started libpod-conmon-dfe9aae0b2691904230297558da3d083f89159c7501ef470ff1f0237191896a0.scope.
Nov 24 18:21:43 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2cbed3599b370eeec8c4e0f56780c050e1a9bcbb3d783d5c824e8c7d8b55305/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2cbed3599b370eeec8c4e0f56780c050e1a9bcbb3d783d5c824e8c7d8b55305/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2cbed3599b370eeec8c4e0f56780c050e1a9bcbb3d783d5c824e8c7d8b55305/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2cbed3599b370eeec8c4e0f56780c050e1a9bcbb3d783d5c824e8c7d8b55305/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2cbed3599b370eeec8c4e0f56780c050e1a9bcbb3d783d5c824e8c7d8b55305/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:43 compute-0 podman[99580]: 2025-11-24 18:21:43.694973785 +0000 UTC m=+0.026522719 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:21:43 compute-0 podman[99580]: 2025-11-24 18:21:43.792394464 +0000 UTC m=+0.123943358 container init dfe9aae0b2691904230297558da3d083f89159c7501ef470ff1f0237191896a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:21:43 compute-0 podman[99580]: 2025-11-24 18:21:43.800169358 +0000 UTC m=+0.131718242 container start dfe9aae0b2691904230297558da3d083f89159c7501ef470ff1f0237191896a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_murdock, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:21:43 compute-0 podman[99580]: 2025-11-24 18:21:43.803920201 +0000 UTC m=+0.135469085 container attach dfe9aae0b2691904230297558da3d083f89159c7501ef470ff1f0237191896a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:21:43 compute-0 python3[99613]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:21:43 compute-0 podman[99621]: 2025-11-24 18:21:43.936697978 +0000 UTC m=+0.037380580 container create b11f347d6a53010cb1b3143ff38ebb6057e5148b7628c897043a6294f6bbfcf4 (image=quay.io/ceph/ceph:v18, name=jovial_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 18:21:43 compute-0 systemd[1]: Started libpod-conmon-b11f347d6a53010cb1b3143ff38ebb6057e5148b7628c897043a6294f6bbfcf4.scope.
Nov 24 18:21:43 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/932e71e10fd8050a5af6e3d91fc5678c868bc60eaf9ab993b773e3d0dc238597/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/932e71e10fd8050a5af6e3d91fc5678c868bc60eaf9ab993b773e3d0dc238597/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:44 compute-0 podman[99621]: 2025-11-24 18:21:44.008052129 +0000 UTC m=+0.108734741 container init b11f347d6a53010cb1b3143ff38ebb6057e5148b7628c897043a6294f6bbfcf4 (image=quay.io/ceph/ceph:v18, name=jovial_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 18:21:44 compute-0 podman[99621]: 2025-11-24 18:21:43.919558622 +0000 UTC m=+0.020241234 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:21:44 compute-0 podman[99621]: 2025-11-24 18:21:44.020001326 +0000 UTC m=+0.120683938 container start b11f347d6a53010cb1b3143ff38ebb6057e5148b7628c897043a6294f6bbfcf4 (image=quay.io/ceph/ceph:v18, name=jovial_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 18:21:44 compute-0 podman[99621]: 2025-11-24 18:21:44.023582355 +0000 UTC m=+0.124264967 container attach b11f347d6a53010cb1b3143ff38ebb6057e5148b7628c897043a6294f6bbfcf4 (image=quay.io/ceph/ceph:v18, name=jovial_borg, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 18:21:44 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v100: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:44 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 24 18:21:44 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 18:21:44 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 24 18:21:44 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 18:21:44 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 24 18:21:44 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 18:21:44 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 24 18:21:44 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 18:21:44 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 24 18:21:44 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 18:21:44 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 24 18:21:44 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 18:21:44 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Nov 24 18:21:44 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 18:21:44 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 18:21:44 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 18:21:44 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 18:21:44 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 18:21:44 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 18:21:44 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Nov 24 18:21:44 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972961426s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565208435s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972451210s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.564727783s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972639084s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.564918518s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972542763s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.564819336s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972915649s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565208435s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972589493s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.564918518s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972389221s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.564727783s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972466469s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.564819336s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972216606s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.564704895s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972378731s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.564872742s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972356796s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.564872742s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972193718s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.564704895s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972417831s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.564964294s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972393990s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.564964294s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972384453s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565002441s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972573280s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565185547s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972368240s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565002441s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972553253s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565185547s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972796440s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565521240s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972274780s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565032959s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972774506s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565521240s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972763062s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565551758s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972251892s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565032959s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972743988s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565551758s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972208023s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565055847s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972194672s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565055847s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972286224s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565185547s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972272873s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565185547s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.971922874s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.564880371s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972307205s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565269470s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972374916s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565353394s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.971899986s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.564880371s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972357750s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565353394s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972282410s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565269470s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972621918s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565628052s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972594261s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565628052s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972410202s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565559387s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972494125s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565673828s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972476006s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565673828s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972030640s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565261841s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.973052979s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.566291809s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972017288s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565261841s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.973031044s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.566291809s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972310066s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565605164s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-mon[74927]: osdmap e44: 3 total, 3 up, 3 in
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972296715s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565605164s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972310066s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565628052s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972296715s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565628052s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-mon[74927]: 4.2 scrub starts
Nov 24 18:21:44 compute-0 ceph-mon[74927]: 4.2 scrub ok
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972307205s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565689087s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972433090s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565826416s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972291946s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565689087s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972815514s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.566238403s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972408295s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565826416s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972800255s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.566238403s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 18:21:44 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972352982s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565834045s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972339630s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565834045s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972348213s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565872192s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972334862s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565872192s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972292900s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565856934s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972325325s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565895081s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972311974s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565895081s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972267151s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565856934s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972247124s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565879822s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972230911s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565879822s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972231865s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565917969s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972186089s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565910339s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972208977s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565917969s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972166061s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565910339s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972162247s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565940857s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972177505s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565963745s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972281456s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.566093445s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972155571s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565963745s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972265244s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.566093445s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972393036s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565559387s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972136497s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565940857s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972146988s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.566017151s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972131729s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.566017151s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972361565s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.566284180s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972232819s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.566177368s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972347260s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.566284180s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972199440s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.566184998s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972184181s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.566184998s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972196579s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.566215515s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972215652s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.566177368s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972173691s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.566215515s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972187996s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.566246033s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972173691s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.566246033s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.959789276s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541221619s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.959757805s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541221619s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.15( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.950924873s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.532524109s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.15( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.950902939s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.532524109s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.14( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.950791359s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.532554626s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.14( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.950774193s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.532554626s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.17( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.950805664s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.532661438s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.17( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.950791359s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.532661438s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.950683594s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.532623291s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.950666428s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.532623291s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.950606346s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.532638550s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.950592041s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.532638550s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.11( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.950521469s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.532646179s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.987396240s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113769531s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.11( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.950506210s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.532646179s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.950457573s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.532661438s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.950446129s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.532661438s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.987370491s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113769531s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.795005798s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.921493530s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.794991493s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.921493530s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.794899940s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.921485901s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.794883728s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.921485901s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.794740677s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.921501160s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.794722557s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.921501160s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.794586182s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.921455383s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.794571877s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.921455383s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.986700058s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113601685s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.986670494s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113601685s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.794425011s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.921447754s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.794410706s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.921447754s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.986455917s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113601685s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.794167519s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.921363831s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.986596107s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113800049s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.986581802s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113800049s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.794148445s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.921363831s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.794034004s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.921386719s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.986531258s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113906860s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.794012070s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.921386719s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.986501694s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113906860s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.793845177s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.921371460s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.793884277s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.921424866s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.793824196s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.921371460s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.793861389s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.921424866s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.986217499s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113868713s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.986203194s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113868713s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.986048698s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113838196s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.986026764s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113838196s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.986024857s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113845825s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.986001015s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113845825s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.985937119s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113868713s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.793324471s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.921279907s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.985917091s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113868713s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.793310165s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.921279907s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.793040276s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.921104431s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.985826492s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113883972s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.793024063s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.921104431s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.985806465s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113883972s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.792888641s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.921096802s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.985737801s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113967896s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.792870522s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.921096802s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.985715866s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113967896s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.792678833s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.920997620s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.954547882s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541053772s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.954516411s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541053772s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.13( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.954245567s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.540954590s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.13( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.954216957s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.540954590s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.954298973s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541084290s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.954276085s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541084290s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.d( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.954187393s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541076660s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.d( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.954154968s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541076660s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.954084396s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541061401s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.954063416s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541061401s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.954076767s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.540954590s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.c( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.954051971s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541114807s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.c( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.954023361s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541114807s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953877449s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.540954590s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953915596s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541107178s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953892708s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541107178s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.e( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953944206s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541160583s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.e( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953897476s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541160583s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.2( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953869820s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541152954s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953824997s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541107178s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.2( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953847885s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541152954s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953803062s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541107178s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953815460s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541213989s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953799248s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541213989s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.1( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953764915s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541221619s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.1( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953743935s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541221619s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953792572s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541343689s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953776360s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541343689s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.6( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953671455s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541275024s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.6( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953650475s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541275024s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953734398s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541397095s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953518867s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541198730s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953451157s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541198730s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953720093s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541397095s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953609467s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541419983s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953585625s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541419983s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953466415s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541366577s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953445435s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541366577s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953682899s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541656494s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953592300s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541656494s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.8( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953296661s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541427612s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.8( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953273773s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541427612s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953255653s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541473389s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953228951s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541473389s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953012466s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541305542s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.4( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953123093s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541427612s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.4( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953100204s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541427612s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.952962875s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541305542s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953118324s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541503906s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953091621s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541503906s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953083038s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541542053s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953059196s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541542053s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.1e( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.952995300s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541542053s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953011513s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541564941s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.1f( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.952999115s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541603088s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.1e( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.952971458s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541542053s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.1f( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.952984810s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541603088s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.1c( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.952826500s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541542053s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.1c( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.952803612s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541542053s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.1d( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.952827454s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541595459s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.1d( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.952803612s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541595459s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.952991486s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541564941s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[6.15( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[6.14( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[6.11( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.792664528s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.920997620s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.985569000s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113922119s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.985550880s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113922119s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.792537689s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.920959473s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.792518616s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.920959473s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.985431671s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113929749s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.985412598s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113929749s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.792406082s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.920959473s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.792390823s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.920959473s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.985373497s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.114021301s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.792285919s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.920936584s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.985342979s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.114021301s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.984911919s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113601685s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.792072296s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.920852661s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.985165596s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113967896s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.792122841s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.920936584s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.985146523s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113967896s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.792009354s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.920852661s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.792015076s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.920944214s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.792001724s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.920944214s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.984887123s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113845825s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.984868050s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113845825s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.984975815s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.114006042s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.984963417s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.114006042s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.791745186s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.920829773s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.791713715s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.920829773s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.984852791s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113990784s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.984839439s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113990784s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.984765053s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113990784s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.984751701s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113990784s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.984699249s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113952637s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.984679222s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113952637s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.788194656s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.917518616s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.788181305s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.917518616s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.788196564s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.917587280s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.788178444s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.917587280s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[6.17( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[6.d( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[6.c( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[6.e( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[6.1( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[6.13( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[6.8( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[6.1f( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[6.4( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[6.6( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[6.1e( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[6.1c( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[6.1d( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:44 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 24 18:21:44 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4001581995' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 18:21:44 compute-0 jovial_borg[99636]: 
Nov 24 18:21:44 compute-0 jovial_borg[99636]: {"epoch":1,"fsid":"e5ee928f-099b-569b-93c9-ecf025cbb50d","modified":"2025-11-24T18:18:43.057635Z","created":"2025-11-24T18:18:43.057635Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Nov 24 18:21:44 compute-0 jovial_borg[99636]: dumped monmap epoch 1
Nov 24 18:21:44 compute-0 systemd[1]: libpod-b11f347d6a53010cb1b3143ff38ebb6057e5148b7628c897043a6294f6bbfcf4.scope: Deactivated successfully.
Nov 24 18:21:44 compute-0 podman[99678]: 2025-11-24 18:21:44.692135036 +0000 UTC m=+0.036341713 container died b11f347d6a53010cb1b3143ff38ebb6057e5148b7628c897043a6294f6bbfcf4 (image=quay.io/ceph/ceph:v18, name=jovial_borg, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:21:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-932e71e10fd8050a5af6e3d91fc5678c868bc60eaf9ab993b773e3d0dc238597-merged.mount: Deactivated successfully.
Nov 24 18:21:44 compute-0 podman[99678]: 2025-11-24 18:21:44.7289513 +0000 UTC m=+0.073157987 container remove b11f347d6a53010cb1b3143ff38ebb6057e5148b7628c897043a6294f6bbfcf4 (image=quay.io/ceph/ceph:v18, name=jovial_borg, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 18:21:44 compute-0 systemd[1]: libpod-conmon-b11f347d6a53010cb1b3143ff38ebb6057e5148b7628c897043a6294f6bbfcf4.scope: Deactivated successfully.
Nov 24 18:21:44 compute-0 vibrant_murdock[99616]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:21:44 compute-0 vibrant_murdock[99616]: --> relative data size: 1.0
Nov 24 18:21:44 compute-0 vibrant_murdock[99616]: --> All data devices are unavailable
Nov 24 18:21:44 compute-0 sudo[99610]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:44 compute-0 systemd[1]: libpod-dfe9aae0b2691904230297558da3d083f89159c7501ef470ff1f0237191896a0.scope: Deactivated successfully.
Nov 24 18:21:44 compute-0 podman[99580]: 2025-11-24 18:21:44.773432915 +0000 UTC m=+1.104981829 container died dfe9aae0b2691904230297558da3d083f89159c7501ef470ff1f0237191896a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:21:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2cbed3599b370eeec8c4e0f56780c050e1a9bcbb3d783d5c824e8c7d8b55305-merged.mount: Deactivated successfully.
Nov 24 18:21:44 compute-0 podman[99580]: 2025-11-24 18:21:44.831807364 +0000 UTC m=+1.163356238 container remove dfe9aae0b2691904230297558da3d083f89159c7501ef470ff1f0237191896a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:21:44 compute-0 systemd[1]: libpod-conmon-dfe9aae0b2691904230297558da3d083f89159c7501ef470ff1f0237191896a0.scope: Deactivated successfully.
Nov 24 18:21:44 compute-0 sudo[99412]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:44 compute-0 sudo[99713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:44 compute-0 sudo[99713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:44 compute-0 sudo[99713]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:44 compute-0 sudo[99738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:21:44 compute-0 sudo[99738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:44 compute-0 sudo[99738]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:45 compute-0 sudo[99763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:45 compute-0 sudo[99763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:45 compute-0 sudo[99763]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:45 compute-0 sudo[99788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:21:45 compute-0 sudo[99788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:45 compute-0 sudo[99835]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htctdtlknjzvfpczpfporihprncdkjkc ; /usr/bin/python3'
Nov 24 18:21:45 compute-0 sudo[99835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:45 compute-0 python3[99838]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:21:45 compute-0 podman[99846]: 2025-11-24 18:21:45.273584814 +0000 UTC m=+0.033596965 container create 8d2c76d2f7f13e19df85254b818e65421d8b25187bc08a500663183f349a15c8 (image=quay.io/ceph/ceph:v18, name=condescending_darwin, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:21:45 compute-0 systemd[1]: Started libpod-conmon-8d2c76d2f7f13e19df85254b818e65421d8b25187bc08a500663183f349a15c8.scope.
Nov 24 18:21:45 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65e9a6dd35ff0be385c9f090f2c1bf378389e502a4a9248aea9b99a735acf7cc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65e9a6dd35ff0be385c9f090f2c1bf378389e502a4a9248aea9b99a735acf7cc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:45 compute-0 podman[99846]: 2025-11-24 18:21:45.341352526 +0000 UTC m=+0.101364697 container init 8d2c76d2f7f13e19df85254b818e65421d8b25187bc08a500663183f349a15c8 (image=quay.io/ceph/ceph:v18, name=condescending_darwin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 18:21:45 compute-0 podman[99846]: 2025-11-24 18:21:45.346438833 +0000 UTC m=+0.106450984 container start 8d2c76d2f7f13e19df85254b818e65421d8b25187bc08a500663183f349a15c8 (image=quay.io/ceph/ceph:v18, name=condescending_darwin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:21:45 compute-0 podman[99846]: 2025-11-24 18:21:45.259667668 +0000 UTC m=+0.019679859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:21:45 compute-0 podman[99846]: 2025-11-24 18:21:45.472242857 +0000 UTC m=+0.232255008 container attach 8d2c76d2f7f13e19df85254b818e65421d8b25187bc08a500663183f349a15c8 (image=quay.io/ceph/ceph:v18, name=condescending_darwin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:21:45 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Nov 24 18:21:45 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Nov 24 18:21:45 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Nov 24 18:21:45 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Nov 24 18:21:45 compute-0 podman[99895]: 2025-11-24 18:21:45.633604803 +0000 UTC m=+0.072180793 container create e57975a9c6266735f58c3db4c92f11dc422ec89fbca6a5b0784686e7bafdec8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:21:45 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Nov 24 18:21:45 compute-0 ceph-mon[74927]: pgmap v100: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:45 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 18:21:45 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 18:21:45 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 18:21:45 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 18:21:45 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 18:21:45 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 18:21:45 compute-0 ceph-mon[74927]: osdmap e45: 3 total, 3 up, 3 in
Nov 24 18:21:45 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/4001581995' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[3.12( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[7.1b( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[3.1f( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[2.13( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[5.14( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[5.15( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[3.15( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[2.11( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[6.1e( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[4.18( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[4.d( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[6.d( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[6.c( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[4.f( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[4.2( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[6.1( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[4.7( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[4.9( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[4.5( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[6.b( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[4.8( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[6.17( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[4.14( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[4.12( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[4.10( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[6.1d( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[6.1c( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[2.1b( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[2.17( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[5.11( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[4.4( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[2.15( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[5.12( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[5.9( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[5.13( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[5.16( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[2.d( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[2.3( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[2.5( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[5.1d( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[2.4( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[2.6( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[5.f( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[2.a( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[2.7( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[5.c( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[5.1a( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[5.1( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[5.18( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[5.19( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[2.9( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[7.13( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[3.17( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[2.16( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[3.9( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[2.8( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[3.a( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[2.b( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[7.f( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[7.3( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[5.3( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[3.6( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[5.2( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[3.3( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[2.1f( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[5.5( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[2.2( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[2.f( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[2.1c( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[7.6( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[7.9( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[7.18( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[5.4( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[3.c( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[7.4( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[3.f( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[7.1f( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[3.1b( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[2.18( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[2.19( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[5.1e( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[2.1d( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[3.1( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[5.7( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[6.f( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[4.e( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[4.1( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[4.1b( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[4.a( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[6.8( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[6.15( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[6.11( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[6.14( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[4.11( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[6.13( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[4.1a( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[6.1f( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[4.1c( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[3.18( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[7.1c( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[4.13( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[7.11( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[3.11( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[3.e( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[7.8( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[7.a( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[7.2( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[3.16( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[7.15( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[7.5( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[7.1( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[7.c( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[3.8( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[7.e( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[3.7( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[3.1d( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[7.1a( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[3.1e( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[3.5( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:45 compute-0 systemd[1]: Started libpod-conmon-e57975a9c6266735f58c3db4c92f11dc422ec89fbca6a5b0784686e7bafdec8b.scope.
Nov 24 18:21:45 compute-0 podman[99895]: 2025-11-24 18:21:45.584975536 +0000 UTC m=+0.023551546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:21:45 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:45 compute-0 podman[99895]: 2025-11-24 18:21:45.706699198 +0000 UTC m=+0.145275228 container init e57975a9c6266735f58c3db4c92f11dc422ec89fbca6a5b0784686e7bafdec8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamport, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 18:21:45 compute-0 podman[99895]: 2025-11-24 18:21:45.712730518 +0000 UTC m=+0.151306508 container start e57975a9c6266735f58c3db4c92f11dc422ec89fbca6a5b0784686e7bafdec8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:21:45 compute-0 charming_lamport[99929]: 167 167
Nov 24 18:21:45 compute-0 systemd[1]: libpod-e57975a9c6266735f58c3db4c92f11dc422ec89fbca6a5b0784686e7bafdec8b.scope: Deactivated successfully.
Nov 24 18:21:45 compute-0 podman[99895]: 2025-11-24 18:21:45.716066041 +0000 UTC m=+0.154642061 container attach e57975a9c6266735f58c3db4c92f11dc422ec89fbca6a5b0784686e7bafdec8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 18:21:45 compute-0 podman[99895]: 2025-11-24 18:21:45.716873401 +0000 UTC m=+0.155449451 container died e57975a9c6266735f58c3db4c92f11dc422ec89fbca6a5b0784686e7bafdec8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamport, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 18:21:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e159f7e3adf03a7579e7c9edfe1603c95fdcd834f91ec8a377e6991c4bc47bc-merged.mount: Deactivated successfully.
Nov 24 18:21:45 compute-0 podman[99895]: 2025-11-24 18:21:45.762319939 +0000 UTC m=+0.200895929 container remove e57975a9c6266735f58c3db4c92f11dc422ec89fbca6a5b0784686e7bafdec8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 18:21:45 compute-0 systemd[1]: libpod-conmon-e57975a9c6266735f58c3db4c92f11dc422ec89fbca6a5b0784686e7bafdec8b.scope: Deactivated successfully.
Nov 24 18:21:45 compute-0 podman[99956]: 2025-11-24 18:21:45.935736975 +0000 UTC m=+0.035507342 container create d9a02db99c7ae69374fc6b6d32e66546fa371c94a2cf7b6473a3626da3d29615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:21:45 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Nov 24 18:21:45 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3247829842' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 24 18:21:45 compute-0 condescending_darwin[99877]: [client.openstack]
Nov 24 18:21:45 compute-0 condescending_darwin[99877]:         key = AQBqoSRpAAAAABAAwYZz6MMXWB3V3iQXlmOz0w==
Nov 24 18:21:45 compute-0 condescending_darwin[99877]:         caps mgr = "allow *"
Nov 24 18:21:45 compute-0 condescending_darwin[99877]:         caps mon = "profile rbd"
Nov 24 18:21:45 compute-0 condescending_darwin[99877]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Nov 24 18:21:45 compute-0 podman[99846]: 2025-11-24 18:21:45.976566139 +0000 UTC m=+0.736578310 container died 8d2c76d2f7f13e19df85254b818e65421d8b25187bc08a500663183f349a15c8 (image=quay.io/ceph/ceph:v18, name=condescending_darwin, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:21:45 compute-0 systemd[1]: Started libpod-conmon-d9a02db99c7ae69374fc6b6d32e66546fa371c94a2cf7b6473a3626da3d29615.scope.
Nov 24 18:21:45 compute-0 systemd[1]: libpod-8d2c76d2f7f13e19df85254b818e65421d8b25187bc08a500663183f349a15c8.scope: Deactivated successfully.
Nov 24 18:21:46 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1803449fb1e77601f38a780de5368f45366c74961a192241c48a505bc4c336/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1803449fb1e77601f38a780de5368f45366c74961a192241c48a505bc4c336/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1803449fb1e77601f38a780de5368f45366c74961a192241c48a505bc4c336/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1803449fb1e77601f38a780de5368f45366c74961a192241c48a505bc4c336/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:46 compute-0 podman[99956]: 2025-11-24 18:21:45.920249591 +0000 UTC m=+0.020019978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:21:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-65e9a6dd35ff0be385c9f090f2c1bf378389e502a4a9248aea9b99a735acf7cc-merged.mount: Deactivated successfully.
Nov 24 18:21:46 compute-0 podman[99956]: 2025-11-24 18:21:46.021481155 +0000 UTC m=+0.121251522 container init d9a02db99c7ae69374fc6b6d32e66546fa371c94a2cf7b6473a3626da3d29615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 18:21:46 compute-0 podman[99956]: 2025-11-24 18:21:46.027671277 +0000 UTC m=+0.127441644 container start d9a02db99c7ae69374fc6b6d32e66546fa371c94a2cf7b6473a3626da3d29615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Nov 24 18:21:46 compute-0 podman[99956]: 2025-11-24 18:21:46.032389564 +0000 UTC m=+0.132159961 container attach d9a02db99c7ae69374fc6b6d32e66546fa371c94a2cf7b6473a3626da3d29615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gates, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 18:21:46 compute-0 podman[99846]: 2025-11-24 18:21:46.037289246 +0000 UTC m=+0.797301397 container remove 8d2c76d2f7f13e19df85254b818e65421d8b25187bc08a500663183f349a15c8 (image=quay.io/ceph/ceph:v18, name=condescending_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 24 18:21:46 compute-0 systemd[1]: libpod-conmon-8d2c76d2f7f13e19df85254b818e65421d8b25187bc08a500663183f349a15c8.scope: Deactivated successfully.
Nov 24 18:21:46 compute-0 sudo[99835]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:46 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v103: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:46 compute-0 ceph-mon[74927]: 3.2 scrub starts
Nov 24 18:21:46 compute-0 ceph-mon[74927]: 3.2 scrub ok
Nov 24 18:21:46 compute-0 ceph-mon[74927]: osdmap e46: 3 total, 3 up, 3 in
Nov 24 18:21:46 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3247829842' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 24 18:21:46 compute-0 practical_gates[99975]: {
Nov 24 18:21:46 compute-0 practical_gates[99975]:     "0": [
Nov 24 18:21:46 compute-0 practical_gates[99975]:         {
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "devices": [
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "/dev/loop3"
Nov 24 18:21:46 compute-0 practical_gates[99975]:             ],
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "lv_name": "ceph_lv0",
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "lv_size": "21470642176",
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "name": "ceph_lv0",
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "tags": {
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.cluster_name": "ceph",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.crush_device_class": "",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.encrypted": "0",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.osd_id": "0",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.type": "block",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.vdo": "0"
Nov 24 18:21:46 compute-0 practical_gates[99975]:             },
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "type": "block",
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "vg_name": "ceph_vg0"
Nov 24 18:21:46 compute-0 practical_gates[99975]:         }
Nov 24 18:21:46 compute-0 practical_gates[99975]:     ],
Nov 24 18:21:46 compute-0 practical_gates[99975]:     "1": [
Nov 24 18:21:46 compute-0 practical_gates[99975]:         {
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "devices": [
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "/dev/loop4"
Nov 24 18:21:46 compute-0 practical_gates[99975]:             ],
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "lv_name": "ceph_lv1",
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "lv_size": "21470642176",
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "name": "ceph_lv1",
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "tags": {
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.cluster_name": "ceph",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.crush_device_class": "",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.encrypted": "0",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.osd_id": "1",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.type": "block",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.vdo": "0"
Nov 24 18:21:46 compute-0 practical_gates[99975]:             },
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "type": "block",
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "vg_name": "ceph_vg1"
Nov 24 18:21:46 compute-0 practical_gates[99975]:         }
Nov 24 18:21:46 compute-0 practical_gates[99975]:     ],
Nov 24 18:21:46 compute-0 practical_gates[99975]:     "2": [
Nov 24 18:21:46 compute-0 practical_gates[99975]:         {
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "devices": [
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "/dev/loop5"
Nov 24 18:21:46 compute-0 practical_gates[99975]:             ],
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "lv_name": "ceph_lv2",
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "lv_size": "21470642176",
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "name": "ceph_lv2",
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "tags": {
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.cluster_name": "ceph",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.crush_device_class": "",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.encrypted": "0",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.osd_id": "2",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.type": "block",
Nov 24 18:21:46 compute-0 practical_gates[99975]:                 "ceph.vdo": "0"
Nov 24 18:21:46 compute-0 practical_gates[99975]:             },
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "type": "block",
Nov 24 18:21:46 compute-0 practical_gates[99975]:             "vg_name": "ceph_vg2"
Nov 24 18:21:46 compute-0 practical_gates[99975]:         }
Nov 24 18:21:46 compute-0 practical_gates[99975]:     ]
Nov 24 18:21:46 compute-0 practical_gates[99975]: }
Nov 24 18:21:46 compute-0 systemd[1]: libpod-d9a02db99c7ae69374fc6b6d32e66546fa371c94a2cf7b6473a3626da3d29615.scope: Deactivated successfully.
Nov 24 18:21:46 compute-0 podman[99956]: 2025-11-24 18:21:46.768303318 +0000 UTC m=+0.868073685 container died d9a02db99c7ae69374fc6b6d32e66546fa371c94a2cf7b6473a3626da3d29615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Nov 24 18:21:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc1803449fb1e77601f38a780de5368f45366c74961a192241c48a505bc4c336-merged.mount: Deactivated successfully.
Nov 24 18:21:46 compute-0 podman[99956]: 2025-11-24 18:21:46.819026267 +0000 UTC m=+0.918796634 container remove d9a02db99c7ae69374fc6b6d32e66546fa371c94a2cf7b6473a3626da3d29615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 18:21:46 compute-0 systemd[1]: libpod-conmon-d9a02db99c7ae69374fc6b6d32e66546fa371c94a2cf7b6473a3626da3d29615.scope: Deactivated successfully.
Nov 24 18:21:46 compute-0 sudo[99788]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:46 compute-0 sudo[100007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:46 compute-0 sudo[100007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:46 compute-0 sudo[100007]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:46 compute-0 sudo[100032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:21:46 compute-0 sudo[100032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:46 compute-0 sudo[100032]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:47 compute-0 sudo[100064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:47 compute-0 sudo[100064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:47 compute-0 sudo[100064]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:47 compute-0 sudo[100112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:21:47 compute-0 sudo[100112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:47 compute-0 sudo[100292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzbvmeoqdsbcnavgtnyangsvdskiitnp ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764008507.023095-36964-273350074012948/async_wrapper.py j78697275907 30 /home/zuul/.ansible/tmp/ansible-tmp-1764008507.023095-36964-273350074012948/AnsiballZ_command.py _'
Nov 24 18:21:47 compute-0 sudo[100292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:47 compute-0 podman[100297]: 2025-11-24 18:21:47.370734196 +0000 UTC m=+0.035389189 container create 85d9a90a9127a81680a12d2a0d4eb3b66e1aa3ba6063f64b23fc9ac0c308a769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lamarr, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:21:47 compute-0 systemd[1]: Started libpod-conmon-85d9a90a9127a81680a12d2a0d4eb3b66e1aa3ba6063f64b23fc9ac0c308a769.scope.
Nov 24 18:21:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:47 compute-0 podman[100297]: 2025-11-24 18:21:47.439217977 +0000 UTC m=+0.103873010 container init 85d9a90a9127a81680a12d2a0d4eb3b66e1aa3ba6063f64b23fc9ac0c308a769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lamarr, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:21:47 compute-0 podman[100297]: 2025-11-24 18:21:47.445283578 +0000 UTC m=+0.109938581 container start 85d9a90a9127a81680a12d2a0d4eb3b66e1aa3ba6063f64b23fc9ac0c308a769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lamarr, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 18:21:47 compute-0 podman[100297]: 2025-11-24 18:21:47.447964484 +0000 UTC m=+0.112619507 container attach 85d9a90a9127a81680a12d2a0d4eb3b66e1aa3ba6063f64b23fc9ac0c308a769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 18:21:47 compute-0 hungry_lamarr[100314]: 167 167
Nov 24 18:21:47 compute-0 systemd[1]: libpod-85d9a90a9127a81680a12d2a0d4eb3b66e1aa3ba6063f64b23fc9ac0c308a769.scope: Deactivated successfully.
Nov 24 18:21:47 compute-0 podman[100297]: 2025-11-24 18:21:47.449707367 +0000 UTC m=+0.114362380 container died 85d9a90a9127a81680a12d2a0d4eb3b66e1aa3ba6063f64b23fc9ac0c308a769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lamarr, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:21:47 compute-0 ansible-async_wrapper.py[100296]: Invoked with j78697275907 30 /home/zuul/.ansible/tmp/ansible-tmp-1764008507.023095-36964-273350074012948/AnsiballZ_command.py _
Nov 24 18:21:47 compute-0 podman[100297]: 2025-11-24 18:21:47.354885983 +0000 UTC m=+0.019541006 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:21:47 compute-0 ansible-async_wrapper.py[100322]: Starting module and watcher
Nov 24 18:21:47 compute-0 ansible-async_wrapper.py[100322]: Start watching 100324 (30)
Nov 24 18:21:47 compute-0 ansible-async_wrapper.py[100324]: Start module (100324)
Nov 24 18:21:47 compute-0 ansible-async_wrapper.py[100296]: Return async_wrapper task started.
Nov 24 18:21:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbc45122625b33e6d365e358de528f9ef35d0a56ae62bb68bffc8923ba71ff01-merged.mount: Deactivated successfully.
Nov 24 18:21:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:21:47 compute-0 sudo[100292]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:47 compute-0 podman[100297]: 2025-11-24 18:21:47.483731472 +0000 UTC m=+0.148386465 container remove 85d9a90a9127a81680a12d2a0d4eb3b66e1aa3ba6063f64b23fc9ac0c308a769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:21:47 compute-0 systemd[1]: libpod-conmon-85d9a90a9127a81680a12d2a0d4eb3b66e1aa3ba6063f64b23fc9ac0c308a769.scope: Deactivated successfully.
Nov 24 18:21:47 compute-0 python3[100330]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:21:47 compute-0 podman[100344]: 2025-11-24 18:21:47.624342274 +0000 UTC m=+0.041569804 container create de926f0f5d83a5d743458f93c5bdd7eb8c8b61c2b1d55c7652d41f2c29b8d317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 24 18:21:47 compute-0 podman[100355]: 2025-11-24 18:21:47.650188225 +0000 UTC m=+0.040962668 container create 7c4ad23304836a36bdf83f720051f6be07e0036addd639f9ec1475b5ecfb4167 (image=quay.io/ceph/ceph:v18, name=happy_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 24 18:21:47 compute-0 systemd[1]: Started libpod-conmon-de926f0f5d83a5d743458f93c5bdd7eb8c8b61c2b1d55c7652d41f2c29b8d317.scope.
Nov 24 18:21:47 compute-0 ceph-mon[74927]: pgmap v103: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:47 compute-0 systemd[1]: Started libpod-conmon-7c4ad23304836a36bdf83f720051f6be07e0036addd639f9ec1475b5ecfb4167.scope.
Nov 24 18:21:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5464985d7741a148a99a1b21b70c0bae3d0306d069c4f78217f62073ded689d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5464985d7741a148a99a1b21b70c0bae3d0306d069c4f78217f62073ded689d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5464985d7741a148a99a1b21b70c0bae3d0306d069c4f78217f62073ded689d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5464985d7741a148a99a1b21b70c0bae3d0306d069c4f78217f62073ded689d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aa43e2464b7e98d43837078167d08359ab9deb2a56c6a39b6838f52093dcbb7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aa43e2464b7e98d43837078167d08359ab9deb2a56c6a39b6838f52093dcbb7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:47 compute-0 podman[100344]: 2025-11-24 18:21:47.695509821 +0000 UTC m=+0.112737371 container init de926f0f5d83a5d743458f93c5bdd7eb8c8b61c2b1d55c7652d41f2c29b8d317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:21:47 compute-0 podman[100355]: 2025-11-24 18:21:47.700409472 +0000 UTC m=+0.091183935 container init 7c4ad23304836a36bdf83f720051f6be07e0036addd639f9ec1475b5ecfb4167 (image=quay.io/ceph/ceph:v18, name=happy_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:21:47 compute-0 podman[100344]: 2025-11-24 18:21:47.704841212 +0000 UTC m=+0.122068742 container start de926f0f5d83a5d743458f93c5bdd7eb8c8b61c2b1d55c7652d41f2c29b8d317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mccarthy, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 24 18:21:47 compute-0 podman[100344]: 2025-11-24 18:21:47.609452724 +0000 UTC m=+0.026680274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:21:47 compute-0 podman[100355]: 2025-11-24 18:21:47.708639477 +0000 UTC m=+0.099413920 container start 7c4ad23304836a36bdf83f720051f6be07e0036addd639f9ec1475b5ecfb4167 (image=quay.io/ceph/ceph:v18, name=happy_volhard, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 18:21:47 compute-0 podman[100344]: 2025-11-24 18:21:47.709517499 +0000 UTC m=+0.126745039 container attach de926f0f5d83a5d743458f93c5bdd7eb8c8b61c2b1d55c7652d41f2c29b8d317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 18:21:47 compute-0 podman[100355]: 2025-11-24 18:21:47.712498753 +0000 UTC m=+0.103273236 container attach 7c4ad23304836a36bdf83f720051f6be07e0036addd639f9ec1475b5ecfb4167 (image=quay.io/ceph/ceph:v18, name=happy_volhard, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 24 18:21:47 compute-0 podman[100355]: 2025-11-24 18:21:47.63226534 +0000 UTC m=+0.023039783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:21:48 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 18:21:48 compute-0 happy_volhard[100379]: 
Nov 24 18:21:48 compute-0 happy_volhard[100379]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 24 18:21:48 compute-0 systemd[1]: libpod-7c4ad23304836a36bdf83f720051f6be07e0036addd639f9ec1475b5ecfb4167.scope: Deactivated successfully.
Nov 24 18:21:48 compute-0 podman[100406]: 2025-11-24 18:21:48.327169495 +0000 UTC m=+0.022856788 container died 7c4ad23304836a36bdf83f720051f6be07e0036addd639f9ec1475b5ecfb4167 (image=quay.io/ceph/ceph:v18, name=happy_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:21:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-3aa43e2464b7e98d43837078167d08359ab9deb2a56c6a39b6838f52093dcbb7-merged.mount: Deactivated successfully.
Nov 24 18:21:48 compute-0 podman[100406]: 2025-11-24 18:21:48.366482432 +0000 UTC m=+0.062169695 container remove 7c4ad23304836a36bdf83f720051f6be07e0036addd639f9ec1475b5ecfb4167 (image=quay.io/ceph/ceph:v18, name=happy_volhard, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 18:21:48 compute-0 systemd[1]: libpod-conmon-7c4ad23304836a36bdf83f720051f6be07e0036addd639f9ec1475b5ecfb4167.scope: Deactivated successfully.
Nov 24 18:21:48 compute-0 ansible-async_wrapper.py[100324]: Module complete (100324)
Nov 24 18:21:48 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:48 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Nov 24 18:21:48 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Nov 24 18:21:48 compute-0 sudo[100493]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obdygbbdqbggtvtageuukwwredxrampi ; /usr/bin/python3'
Nov 24 18:21:48 compute-0 sudo[100493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:48 compute-0 frosty_mccarthy[100374]: {
Nov 24 18:21:48 compute-0 frosty_mccarthy[100374]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:21:48 compute-0 frosty_mccarthy[100374]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:21:48 compute-0 frosty_mccarthy[100374]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:21:48 compute-0 frosty_mccarthy[100374]:         "osd_id": 0,
Nov 24 18:21:48 compute-0 frosty_mccarthy[100374]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:21:48 compute-0 frosty_mccarthy[100374]:         "type": "bluestore"
Nov 24 18:21:48 compute-0 frosty_mccarthy[100374]:     },
Nov 24 18:21:48 compute-0 frosty_mccarthy[100374]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:21:48 compute-0 frosty_mccarthy[100374]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:21:48 compute-0 frosty_mccarthy[100374]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:21:48 compute-0 frosty_mccarthy[100374]:         "osd_id": 1,
Nov 24 18:21:48 compute-0 frosty_mccarthy[100374]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:21:48 compute-0 frosty_mccarthy[100374]:         "type": "bluestore"
Nov 24 18:21:48 compute-0 frosty_mccarthy[100374]:     },
Nov 24 18:21:48 compute-0 frosty_mccarthy[100374]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:21:48 compute-0 frosty_mccarthy[100374]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:21:48 compute-0 frosty_mccarthy[100374]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:21:48 compute-0 frosty_mccarthy[100374]:         "osd_id": 2,
Nov 24 18:21:48 compute-0 frosty_mccarthy[100374]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:21:48 compute-0 frosty_mccarthy[100374]:         "type": "bluestore"
Nov 24 18:21:48 compute-0 frosty_mccarthy[100374]:     }
Nov 24 18:21:48 compute-0 frosty_mccarthy[100374]: }
Nov 24 18:21:48 compute-0 systemd[1]: libpod-de926f0f5d83a5d743458f93c5bdd7eb8c8b61c2b1d55c7652d41f2c29b8d317.scope: Deactivated successfully.
Nov 24 18:21:48 compute-0 podman[100344]: 2025-11-24 18:21:48.652167285 +0000 UTC m=+1.069394825 container died de926f0f5d83a5d743458f93c5bdd7eb8c8b61c2b1d55c7652d41f2c29b8d317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 18:21:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-5464985d7741a148a99a1b21b70c0bae3d0306d069c4f78217f62073ded689d8-merged.mount: Deactivated successfully.
Nov 24 18:21:48 compute-0 podman[100344]: 2025-11-24 18:21:48.717724163 +0000 UTC m=+1.134951703 container remove de926f0f5d83a5d743458f93c5bdd7eb8c8b61c2b1d55c7652d41f2c29b8d317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mccarthy, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:21:48 compute-0 systemd[1]: libpod-conmon-de926f0f5d83a5d743458f93c5bdd7eb8c8b61c2b1d55c7652d41f2c29b8d317.scope: Deactivated successfully.
Nov 24 18:21:48 compute-0 sudo[100112]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:48 compute-0 python3[100497]: ansible-ansible.legacy.async_status Invoked with jid=j78697275907.100296 mode=status _async_dir=/root/.ansible_async
Nov 24 18:21:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:21:48 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:21:48 compute-0 sudo[100493]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:48 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:48 compute-0 ceph-mgr[75218]: [progress INFO root] update: starting ev 78a7e1ea-35cf-45e0-ae4c-ab983836de74 (Updating rgw.rgw deployment (+1 -> 1))
Nov 24 18:21:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pecquu", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Nov 24 18:21:48 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pecquu", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 18:21:48 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pecquu", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 24 18:21:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Nov 24 18:21:48 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:21:48 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:21:48 compute-0 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.pecquu on compute-0
Nov 24 18:21:48 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.pecquu on compute-0
Nov 24 18:21:48 compute-0 sudo[100509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:48 compute-0 sudo[100509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:48 compute-0 sudo[100509]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:48 compute-0 sudo[100601]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpyhrlacsptjcqdccrlvheybsuxokgsa ; /usr/bin/python3'
Nov 24 18:21:48 compute-0 sudo[100601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:48 compute-0 sudo[100562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:21:48 compute-0 sudo[100562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:48 compute-0 sudo[100562]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:48 compute-0 sudo[100608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:48 compute-0 sudo[100608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:48 compute-0 sudo[100608]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:48 compute-0 sudo[100633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 18:21:48 compute-0 sudo[100633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:49 compute-0 python3[100605]: ansible-ansible.legacy.async_status Invoked with jid=j78697275907.100296 mode=cleanup _async_dir=/root/.ansible_async
Nov 24 18:21:49 compute-0 sudo[100601]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:49 compute-0 podman[100698]: 2025-11-24 18:21:49.255163238 +0000 UTC m=+0.032818886 container create 445ea5b96657481ee98bae2f5bb56876a11b91de3974161e166d4aea21d162ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mclean, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:21:49 compute-0 systemd[1]: Started libpod-conmon-445ea5b96657481ee98bae2f5bb56876a11b91de3974161e166d4aea21d162ad.scope.
Nov 24 18:21:49 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:49 compute-0 podman[100698]: 2025-11-24 18:21:49.325562326 +0000 UTC m=+0.103217974 container init 445ea5b96657481ee98bae2f5bb56876a11b91de3974161e166d4aea21d162ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:21:49 compute-0 podman[100698]: 2025-11-24 18:21:49.333933884 +0000 UTC m=+0.111589532 container start 445ea5b96657481ee98bae2f5bb56876a11b91de3974161e166d4aea21d162ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:21:49 compute-0 podman[100698]: 2025-11-24 18:21:49.336755914 +0000 UTC m=+0.114411592 container attach 445ea5b96657481ee98bae2f5bb56876a11b91de3974161e166d4aea21d162ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 18:21:49 compute-0 podman[100698]: 2025-11-24 18:21:49.239982791 +0000 UTC m=+0.017638459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:21:49 compute-0 serene_mclean[100714]: 167 167
Nov 24 18:21:49 compute-0 systemd[1]: libpod-445ea5b96657481ee98bae2f5bb56876a11b91de3974161e166d4aea21d162ad.scope: Deactivated successfully.
Nov 24 18:21:49 compute-0 podman[100698]: 2025-11-24 18:21:49.337961144 +0000 UTC m=+0.115616792 container died 445ea5b96657481ee98bae2f5bb56876a11b91de3974161e166d4aea21d162ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 24 18:21:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e90de0b54ef416e72bf5be05ca61a28c800c06f88efacedd5b6eb6edc77116c-merged.mount: Deactivated successfully.
Nov 24 18:21:49 compute-0 podman[100698]: 2025-11-24 18:21:49.368518773 +0000 UTC m=+0.146174421 container remove 445ea5b96657481ee98bae2f5bb56876a11b91de3974161e166d4aea21d162ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 18:21:49 compute-0 systemd[1]: libpod-conmon-445ea5b96657481ee98bae2f5bb56876a11b91de3974161e166d4aea21d162ad.scope: Deactivated successfully.
Nov 24 18:21:49 compute-0 systemd[1]: Reloading.
Nov 24 18:21:49 compute-0 systemd-sysv-generator[100782]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:21:49 compute-0 systemd-rc-local-generator[100779]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:21:49 compute-0 sudo[100759]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnxbxpemuhnrxjksgykbdqjnaslyixzc ; /usr/bin/python3'
Nov 24 18:21:49 compute-0 sudo[100759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:49 compute-0 systemd[1]: Reloading.
Nov 24 18:21:49 compute-0 systemd-rc-local-generator[100829]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:21:49 compute-0 ceph-mon[74927]: from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 18:21:49 compute-0 ceph-mon[74927]: pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:49 compute-0 ceph-mon[74927]: 3.4 scrub starts
Nov 24 18:21:49 compute-0 ceph-mon[74927]: 3.4 scrub ok
Nov 24 18:21:49 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:49 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:49 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pecquu", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 18:21:49 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pecquu", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 24 18:21:49 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:49 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:21:49 compute-0 ceph-mon[74927]: Deploying daemon rgw.rgw.compute-0.pecquu on compute-0
Nov 24 18:21:49 compute-0 systemd-sysv-generator[100832]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:21:49 compute-0 python3[100797]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:21:49 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.3 deep-scrub starts
Nov 24 18:21:49 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.3 deep-scrub ok
Nov 24 18:21:49 compute-0 podman[100836]: 2025-11-24 18:21:49.905934916 +0000 UTC m=+0.073348152 container create 35d6dcb2c6351cf287f99811ac223bed71e7a31897f8a0cfb3c659fae3efda13 (image=quay.io/ceph/ceph:v18, name=suspicious_turing, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:21:49 compute-0 systemd[1]: Started libpod-conmon-35d6dcb2c6351cf287f99811ac223bed71e7a31897f8a0cfb3c659fae3efda13.scope.
Nov 24 18:21:49 compute-0 podman[100836]: 2025-11-24 18:21:49.880195037 +0000 UTC m=+0.047608363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:21:49 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.pecquu for e5ee928f-099b-569b-93c9-ecf025cbb50d...
Nov 24 18:21:49 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1c5f4dab92447616223da4eef716e8594c40d29920d4d6e5a79a69adc4d3220/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1c5f4dab92447616223da4eef716e8594c40d29920d4d6e5a79a69adc4d3220/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:50 compute-0 podman[100836]: 2025-11-24 18:21:50.001867048 +0000 UTC m=+0.169280304 container init 35d6dcb2c6351cf287f99811ac223bed71e7a31897f8a0cfb3c659fae3efda13 (image=quay.io/ceph/ceph:v18, name=suspicious_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 18:21:50 compute-0 podman[100836]: 2025-11-24 18:21:50.011507558 +0000 UTC m=+0.178920784 container start 35d6dcb2c6351cf287f99811ac223bed71e7a31897f8a0cfb3c659fae3efda13 (image=quay.io/ceph/ceph:v18, name=suspicious_turing, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:21:50 compute-0 podman[100836]: 2025-11-24 18:21:50.014697097 +0000 UTC m=+0.182110353 container attach 35d6dcb2c6351cf287f99811ac223bed71e7a31897f8a0cfb3c659fae3efda13 (image=quay.io/ceph/ceph:v18, name=suspicious_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 18:21:50 compute-0 podman[100903]: 2025-11-24 18:21:50.195796164 +0000 UTC m=+0.042999959 container create 1905e5a7aafe9faa6120b9302738e1e90e777a8e8f941c8c0f2564ce6b43ff73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-rgw-rgw-compute-0-pecquu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 18:21:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32568201311f79804fc9dfcbfcdf0d65b67bcbe2fae68cc844380ade5762c297/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32568201311f79804fc9dfcbfcdf0d65b67bcbe2fae68cc844380ade5762c297/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32568201311f79804fc9dfcbfcdf0d65b67bcbe2fae68cc844380ade5762c297/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32568201311f79804fc9dfcbfcdf0d65b67bcbe2fae68cc844380ade5762c297/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.pecquu supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:50 compute-0 podman[100903]: 2025-11-24 18:21:50.258638214 +0000 UTC m=+0.105842029 container init 1905e5a7aafe9faa6120b9302738e1e90e777a8e8f941c8c0f2564ce6b43ff73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-rgw-rgw-compute-0-pecquu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:21:50 compute-0 podman[100903]: 2025-11-24 18:21:50.262862229 +0000 UTC m=+0.110066014 container start 1905e5a7aafe9faa6120b9302738e1e90e777a8e8f941c8c0f2564ce6b43ff73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-rgw-rgw-compute-0-pecquu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:21:50 compute-0 bash[100903]: 1905e5a7aafe9faa6120b9302738e1e90e777a8e8f941c8c0f2564ce6b43ff73
Nov 24 18:21:50 compute-0 podman[100903]: 2025-11-24 18:21:50.178637008 +0000 UTC m=+0.025840803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:21:50 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.pecquu for e5ee928f-099b-569b-93c9-ecf025cbb50d.
Nov 24 18:21:50 compute-0 sudo[100633]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:50 compute-0 radosgw[100923]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 24 18:21:50 compute-0 radosgw[100923]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Nov 24 18:21:50 compute-0 radosgw[100923]: framework: beast
Nov 24 18:21:50 compute-0 radosgw[100923]: framework conf key: endpoint, val: 192.168.122.100:8082
Nov 24 18:21:50 compute-0 radosgw[100923]: init_numa not setting numa affinity
Nov 24 18:21:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:21:50 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:21:50 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 24 18:21:50 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:50 compute-0 ceph-mgr[75218]: [progress INFO root] complete: finished ev 78a7e1ea-35cf-45e0-ae4c-ab983836de74 (Updating rgw.rgw deployment (+1 -> 1))
Nov 24 18:21:50 compute-0 ceph-mgr[75218]: [progress INFO root] Completed event 78a7e1ea-35cf-45e0-ae4c-ab983836de74 (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Nov 24 18:21:50 compute-0 ceph-mgr[75218]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Nov 24 18:21:50 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 24 18:21:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 24 18:21:50 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 24 18:21:50 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:50 compute-0 ceph-mgr[75218]: [progress INFO root] update: starting ev a0179e9b-54c5-4ae3-8845-23c41a2c2c19 (Updating mds.cephfs deployment (+1 -> 1))
Nov 24 18:21:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.apnhwb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 24 18:21:50 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.apnhwb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 24 18:21:50 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.apnhwb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 24 18:21:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:21:50 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:21:50 compute-0 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.apnhwb on compute-0
Nov 24 18:21:50 compute-0 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.apnhwb on compute-0
Nov 24 18:21:50 compute-0 sudo[101004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:50 compute-0 sudo[101004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:50 compute-0 sudo[101004]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:50 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:50 compute-0 sudo[101029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:21:50 compute-0 sudo[101029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:50 compute-0 sudo[101029]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:50 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14265 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 18:21:50 compute-0 suspicious_turing[100854]: 
Nov 24 18:21:50 compute-0 suspicious_turing[100854]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 24 18:21:50 compute-0 sudo[101054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:50 compute-0 sudo[101054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:50 compute-0 sudo[101054]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:50 compute-0 systemd[1]: libpod-35d6dcb2c6351cf287f99811ac223bed71e7a31897f8a0cfb3c659fae3efda13.scope: Deactivated successfully.
Nov 24 18:21:50 compute-0 podman[100836]: 2025-11-24 18:21:50.581408619 +0000 UTC m=+0.748821845 container died 35d6dcb2c6351cf287f99811ac223bed71e7a31897f8a0cfb3c659fae3efda13 (image=quay.io/ceph/ceph:v18, name=suspicious_turing, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:21:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1c5f4dab92447616223da4eef716e8594c40d29920d4d6e5a79a69adc4d3220-merged.mount: Deactivated successfully.
Nov 24 18:21:50 compute-0 podman[100836]: 2025-11-24 18:21:50.622321265 +0000 UTC m=+0.789734491 container remove 35d6dcb2c6351cf287f99811ac223bed71e7a31897f8a0cfb3c659fae3efda13 (image=quay.io/ceph/ceph:v18, name=suspicious_turing, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:21:50 compute-0 sudo[101082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 18:21:50 compute-0 systemd[1]: libpod-conmon-35d6dcb2c6351cf287f99811ac223bed71e7a31897f8a0cfb3c659fae3efda13.scope: Deactivated successfully.
Nov 24 18:21:50 compute-0 sudo[101082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:50 compute-0 sudo[100759]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:50 compute-0 ceph-mon[74927]: 4.3 deep-scrub starts
Nov 24 18:21:50 compute-0 ceph-mon[74927]: 4.3 deep-scrub ok
Nov 24 18:21:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.apnhwb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 24 18:21:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.apnhwb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 24 18:21:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:21:50 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Nov 24 18:21:50 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Nov 24 18:21:51 compute-0 podman[101156]: 2025-11-24 18:21:51.024801538 +0000 UTC m=+0.059949729 container create c41ede5d63c43376f8db1e2032195f1508dfec642cb9d7f6f41c0c15a417e93f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:21:51 compute-0 systemd[1]: Started libpod-conmon-c41ede5d63c43376f8db1e2032195f1508dfec642cb9d7f6f41c0c15a417e93f.scope.
Nov 24 18:21:51 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:51 compute-0 podman[101156]: 2025-11-24 18:21:50.998077865 +0000 UTC m=+0.033226146 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:21:51 compute-0 podman[101156]: 2025-11-24 18:21:51.102919818 +0000 UTC m=+0.138068009 container init c41ede5d63c43376f8db1e2032195f1508dfec642cb9d7f6f41c0c15a417e93f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_almeida, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Nov 24 18:21:51 compute-0 podman[101156]: 2025-11-24 18:21:51.109284966 +0000 UTC m=+0.144433157 container start c41ede5d63c43376f8db1e2032195f1508dfec642cb9d7f6f41c0c15a417e93f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:21:51 compute-0 podman[101156]: 2025-11-24 18:21:51.113201814 +0000 UTC m=+0.148350005 container attach c41ede5d63c43376f8db1e2032195f1508dfec642cb9d7f6f41c0c15a417e93f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_almeida, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 18:21:51 compute-0 xenodochial_almeida[101173]: 167 167
Nov 24 18:21:51 compute-0 systemd[1]: libpod-c41ede5d63c43376f8db1e2032195f1508dfec642cb9d7f6f41c0c15a417e93f.scope: Deactivated successfully.
Nov 24 18:21:51 compute-0 podman[101156]: 2025-11-24 18:21:51.117686915 +0000 UTC m=+0.152835116 container died c41ede5d63c43376f8db1e2032195f1508dfec642cb9d7f6f41c0c15a417e93f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_almeida, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:21:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8f6fb1bd22ae4968c924723e89c9ab8094c4b4d99b4e0c761b5b56f6f44f2d5-merged.mount: Deactivated successfully.
Nov 24 18:21:51 compute-0 podman[101156]: 2025-11-24 18:21:51.15414874 +0000 UTC m=+0.189296931 container remove c41ede5d63c43376f8db1e2032195f1508dfec642cb9d7f6f41c0c15a417e93f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 18:21:51 compute-0 systemd[1]: libpod-conmon-c41ede5d63c43376f8db1e2032195f1508dfec642cb9d7f6f41c0c15a417e93f.scope: Deactivated successfully.
Nov 24 18:21:51 compute-0 systemd[1]: Reloading.
Nov 24 18:21:51 compute-0 systemd-sysv-generator[101221]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:21:51 compute-0 systemd-rc-local-generator[101217]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:21:51 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Nov 24 18:21:51 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Nov 24 18:21:51 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Nov 24 18:21:51 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Nov 24 18:21:51 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2803639548' entity='client.rgw.rgw.compute-0.pecquu' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 24 18:21:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 47 pg[8.0( empty local-lis/les=0/0 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [1] r=0 lpr=47 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:51 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.b scrub starts
Nov 24 18:21:51 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.b scrub ok
Nov 24 18:21:51 compute-0 sudo[101250]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adhvugbxlmhahfzfrfsjdhaykscjowlr ; /usr/bin/python3'
Nov 24 18:21:51 compute-0 sudo[101250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:51 compute-0 systemd[1]: Reloading.
Nov 24 18:21:51 compute-0 systemd-sysv-generator[101289]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:21:51 compute-0 systemd-rc-local-generator[101285]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:21:51 compute-0 python3[101254]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:21:51 compute-0 podman[101293]: 2025-11-24 18:21:51.637197035 +0000 UTC m=+0.034756654 container create 07e5d1071f85e00cdb0cb865a3d3855c3a701ba2ffce2bcd214fa8654eaa832b (image=quay.io/ceph/ceph:v18, name=hardcore_curran, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:21:51 compute-0 podman[101293]: 2025-11-24 18:21:51.623438393 +0000 UTC m=+0.020998032 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:21:51 compute-0 systemd[1]: Started libpod-conmon-07e5d1071f85e00cdb0cb865a3d3855c3a701ba2ffce2bcd214fa8654eaa832b.scope.
Nov 24 18:21:51 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.apnhwb for e5ee928f-099b-569b-93c9-ecf025cbb50d...
Nov 24 18:21:51 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90fba42c68b75cea4d14c3e8dd5d6656c7f557e53a24f33cfb345892dfe739be/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90fba42c68b75cea4d14c3e8dd5d6656c7f557e53a24f33cfb345892dfe739be/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:51 compute-0 ceph-mon[74927]: Saving service rgw.rgw spec with placement compute-0
Nov 24 18:21:51 compute-0 ceph-mon[74927]: Deploying daemon mds.cephfs.compute-0.apnhwb on compute-0
Nov 24 18:21:51 compute-0 ceph-mon[74927]: pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:51 compute-0 ceph-mon[74927]: from='client.14265 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 18:21:51 compute-0 ceph-mon[74927]: 4.6 scrub starts
Nov 24 18:21:51 compute-0 ceph-mon[74927]: 4.6 scrub ok
Nov 24 18:21:51 compute-0 ceph-mon[74927]: osdmap e47: 3 total, 3 up, 3 in
Nov 24 18:21:51 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2803639548' entity='client.rgw.rgw.compute-0.pecquu' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 24 18:21:51 compute-0 ceph-mon[74927]: 3.b scrub starts
Nov 24 18:21:51 compute-0 ceph-mon[74927]: 3.b scrub ok
Nov 24 18:21:51 compute-0 podman[101293]: 2025-11-24 18:21:51.785370364 +0000 UTC m=+0.182930003 container init 07e5d1071f85e00cdb0cb865a3d3855c3a701ba2ffce2bcd214fa8654eaa832b (image=quay.io/ceph/ceph:v18, name=hardcore_curran, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 18:21:51 compute-0 podman[101293]: 2025-11-24 18:21:51.798069399 +0000 UTC m=+0.195629018 container start 07e5d1071f85e00cdb0cb865a3d3855c3a701ba2ffce2bcd214fa8654eaa832b (image=quay.io/ceph/ceph:v18, name=hardcore_curran, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 18:21:51 compute-0 podman[101293]: 2025-11-24 18:21:51.802775146 +0000 UTC m=+0.200334765 container attach 07e5d1071f85e00cdb0cb865a3d3855c3a701ba2ffce2bcd214fa8654eaa832b (image=quay.io/ceph/ceph:v18, name=hardcore_curran, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 18:21:51 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.b scrub starts
Nov 24 18:21:51 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.b scrub ok
Nov 24 18:21:51 compute-0 podman[101360]: 2025-11-24 18:21:51.973959607 +0000 UTC m=+0.047442129 container create f8af585414f5203083e73145075b9783ac13a27d57e1366f7b97b40576de60b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mds-cephfs-compute-0-apnhwb, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40331e92f3b1f6298dd0ccff9a97cd00b7ee4478cf53a681ed64faafaa1286ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40331e92f3b1f6298dd0ccff9a97cd00b7ee4478cf53a681ed64faafaa1286ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40331e92f3b1f6298dd0ccff9a97cd00b7ee4478cf53a681ed64faafaa1286ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40331e92f3b1f6298dd0ccff9a97cd00b7ee4478cf53a681ed64faafaa1286ac/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.apnhwb supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:52 compute-0 podman[101360]: 2025-11-24 18:21:51.944394383 +0000 UTC m=+0.017876925 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:21:52 compute-0 podman[101360]: 2025-11-24 18:21:52.043615126 +0000 UTC m=+0.117097678 container init f8af585414f5203083e73145075b9783ac13a27d57e1366f7b97b40576de60b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mds-cephfs-compute-0-apnhwb, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:21:52 compute-0 podman[101360]: 2025-11-24 18:21:52.048155329 +0000 UTC m=+0.121637861 container start f8af585414f5203083e73145075b9783ac13a27d57e1366f7b97b40576de60b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mds-cephfs-compute-0-apnhwb, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 18:21:52 compute-0 bash[101360]: f8af585414f5203083e73145075b9783ac13a27d57e1366f7b97b40576de60b1
Nov 24 18:21:52 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.apnhwb for e5ee928f-099b-569b-93c9-ecf025cbb50d.
Nov 24 18:21:52 compute-0 sudo[101082]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:21:52 compute-0 ceph-mds[101380]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 18:21:52 compute-0 ceph-mds[101380]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Nov 24 18:21:52 compute-0 ceph-mds[101380]: main not setting numa affinity
Nov 24 18:21:52 compute-0 ceph-mds[101380]: pidfile_write: ignore empty --pid-file
Nov 24 18:21:52 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mds-cephfs-compute-0-apnhwb[101376]: starting mds.cephfs.compute-0.apnhwb at 
Nov 24 18:21:52 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:21:52 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:52 compute-0 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb Updating MDS map to version 2 from mon.0
Nov 24 18:21:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 24 18:21:52 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:52 compute-0 ceph-mgr[75218]: [progress INFO root] complete: finished ev a0179e9b-54c5-4ae3-8845-23c41a2c2c19 (Updating mds.cephfs deployment (+1 -> 1))
Nov 24 18:21:52 compute-0 ceph-mgr[75218]: [progress INFO root] Completed event a0179e9b-54c5-4ae3-8845-23c41a2c2c19 (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Nov 24 18:21:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Nov 24 18:21:52 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 24 18:21:52 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:52 compute-0 sudo[101418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:52 compute-0 sudo[101418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:52 compute-0 sudo[101418]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:52 compute-0 sudo[101443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:21:52 compute-0 sudo[101443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:52 compute-0 sudo[101443]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:52 compute-0 sudo[101468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:52 compute-0 sudo[101468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:52 compute-0 sudo[101468]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Nov 24 18:21:52 compute-0 sudo[101493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:21:52 compute-0 sudo[101493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:52 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2803639548' entity='client.rgw.rgw.compute-0.pecquu' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 24 18:21:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Nov 24 18:21:52 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Nov 24 18:21:52 compute-0 sudo[101493]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 48 pg[8.0( empty local-lis/les=47/48 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [1] r=0 lpr=47 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:52 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14269 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 18:21:52 compute-0 hardcore_curran[101310]: 
Nov 24 18:21:52 compute-0 hardcore_curran[101310]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Nov 24 18:21:52 compute-0 sudo[101518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:52 compute-0 systemd[1]: libpod-07e5d1071f85e00cdb0cb865a3d3855c3a701ba2ffce2bcd214fa8654eaa832b.scope: Deactivated successfully.
Nov 24 18:21:52 compute-0 sudo[101518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:52 compute-0 conmon[101310]: conmon 07e5d1071f85e00cdb0c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-07e5d1071f85e00cdb0cb865a3d3855c3a701ba2ffce2bcd214fa8654eaa832b.scope/container/memory.events
Nov 24 18:21:52 compute-0 podman[101293]: 2025-11-24 18:21:52.388941301 +0000 UTC m=+0.786500930 container died 07e5d1071f85e00cdb0cb865a3d3855c3a701ba2ffce2bcd214fa8654eaa832b (image=quay.io/ceph/ceph:v18, name=hardcore_curran, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:21:52 compute-0 sudo[101518]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-90fba42c68b75cea4d14c3e8dd5d6656c7f557e53a24f33cfb345892dfe739be-merged.mount: Deactivated successfully.
Nov 24 18:21:52 compute-0 podman[101293]: 2025-11-24 18:21:52.431110628 +0000 UTC m=+0.828670247 container remove 07e5d1071f85e00cdb0cb865a3d3855c3a701ba2ffce2bcd214fa8654eaa832b (image=quay.io/ceph/ceph:v18, name=hardcore_curran, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 18:21:52 compute-0 systemd[1]: libpod-conmon-07e5d1071f85e00cdb0cb865a3d3855c3a701ba2ffce2bcd214fa8654eaa832b.scope: Deactivated successfully.
Nov 24 18:21:52 compute-0 sudo[101250]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:52 compute-0 sudo[101548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 24 18:21:52 compute-0 sudo[101548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:52 compute-0 ansible-async_wrapper.py[100322]: Done in kid B.
Nov 24 18:21:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).mds e2 assigned standby [v2:192.168.122.100:6814/2272409054,v1:192.168.122.100:6815/2272409054] as mds.0
Nov 24 18:21:52 compute-0 ceph-mon[74927]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.apnhwb assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 24 18:21:52 compute-0 ceph-mon[74927]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 24 18:21:52 compute-0 ceph-mon[74927]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 24 18:21:52 compute-0 ceph-mon[74927]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 24 18:21:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:21:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).mds e3 new map
Nov 24 18:21:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        3
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-24T18:21:38.431267+0000
                                           modified        2025-11-24T18:21:52.477258+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14267}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-0.apnhwb{0:14267} state up:creating seq 1 addr [v2:192.168.122.100:6814/2272409054,v1:192.168.122.100:6815/2272409054] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Nov 24 18:21:52 compute-0 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb Updating MDS map to version 3 from mon.0
Nov 24 18:21:52 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v108: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:52 compute-0 ceph-mds[101380]: mds.0.3 handle_mds_map i am now mds.0.3
Nov 24 18:21:52 compute-0 ceph-mds[101380]: mds.0.3 handle_mds_map state change up:standby --> up:creating
Nov 24 18:21:52 compute-0 ceph-mds[101380]: mds.0.cache creating system inode with ino:0x1
Nov 24 18:21:52 compute-0 ceph-mds[101380]: mds.0.cache creating system inode with ino:0x100
Nov 24 18:21:52 compute-0 ceph-mds[101380]: mds.0.cache creating system inode with ino:0x600
Nov 24 18:21:52 compute-0 ceph-mds[101380]: mds.0.cache creating system inode with ino:0x601
Nov 24 18:21:52 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2272409054,v1:192.168.122.100:6815/2272409054] up:boot
Nov 24 18:21:52 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.apnhwb=up:creating}
Nov 24 18:21:52 compute-0 ceph-mds[101380]: mds.0.cache creating system inode with ino:0x602
Nov 24 18:21:52 compute-0 ceph-mds[101380]: mds.0.cache creating system inode with ino:0x603
Nov 24 18:21:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.apnhwb"} v 0) v1
Nov 24 18:21:52 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.apnhwb"}]: dispatch
Nov 24 18:21:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).mds e3 all = 0
Nov 24 18:21:52 compute-0 ceph-mds[101380]: mds.0.cache creating system inode with ino:0x604
Nov 24 18:21:52 compute-0 ceph-mds[101380]: mds.0.cache creating system inode with ino:0x605
Nov 24 18:21:52 compute-0 ceph-mds[101380]: mds.0.cache creating system inode with ino:0x606
Nov 24 18:21:52 compute-0 ceph-mds[101380]: mds.0.cache creating system inode with ino:0x607
Nov 24 18:21:52 compute-0 ceph-mds[101380]: mds.0.cache creating system inode with ino:0x608
Nov 24 18:21:52 compute-0 ceph-mds[101380]: mds.0.cache creating system inode with ino:0x609
Nov 24 18:21:52 compute-0 ceph-mds[101380]: mds.0.3 creating_done
Nov 24 18:21:52 compute-0 ceph-mon[74927]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.apnhwb is now active in filesystem cephfs as rank 0
Nov 24 18:21:52 compute-0 ceph-mon[74927]: 4.b scrub starts
Nov 24 18:21:52 compute-0 ceph-mon[74927]: 4.b scrub ok
Nov 24 18:21:52 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:52 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:52 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:52 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:52 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:52 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2803639548' entity='client.rgw.rgw.compute-0.pecquu' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 24 18:21:52 compute-0 ceph-mon[74927]: osdmap e48: 3 total, 3 up, 3 in
Nov 24 18:21:52 compute-0 ceph-mon[74927]: daemon mds.cephfs.compute-0.apnhwb assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 24 18:21:52 compute-0 ceph-mon[74927]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 24 18:21:52 compute-0 ceph-mon[74927]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 24 18:21:52 compute-0 ceph-mon[74927]: Cluster is now healthy
Nov 24 18:21:52 compute-0 ceph-mon[74927]: mds.? [v2:192.168.122.100:6814/2272409054,v1:192.168.122.100:6815/2272409054] up:boot
Nov 24 18:21:52 compute-0 ceph-mon[74927]: fsmap cephfs:1 {0=cephfs.compute-0.apnhwb=up:creating}
Nov 24 18:21:52 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.apnhwb"}]: dispatch
Nov 24 18:21:52 compute-0 ceph-mon[74927]: daemon mds.cephfs.compute-0.apnhwb is now active in filesystem cephfs as rank 0
Nov 24 18:21:52 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.c scrub starts
Nov 24 18:21:52 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.c scrub ok
Nov 24 18:21:52 compute-0 podman[101664]: 2025-11-24 18:21:52.862315325 +0000 UTC m=+0.061226931 container exec 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:21:52 compute-0 podman[101664]: 2025-11-24 18:21:52.955182601 +0000 UTC m=+0.154094097 container exec_died 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:21:53 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Nov 24 18:21:53 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Nov 24 18:21:53 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Nov 24 18:21:53 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Nov 24 18:21:53 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Nov 24 18:21:53 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Nov 24 18:21:53 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2803639548' entity='client.rgw.rgw.compute-0.pecquu' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 24 18:21:53 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 49 pg[9.0( empty local-lis/les=0/0 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:53 compute-0 sudo[101803]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oworslkzseprbxistnqmvkablvcgkdll ; /usr/bin/python3'
Nov 24 18:21:53 compute-0 sudo[101803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:53 compute-0 python3[101812]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:21:53 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).mds e4 new map
Nov 24 18:21:53 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-24T18:21:38.431267+0000
                                           modified        2025-11-24T18:21:53.481295+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14267}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-0.apnhwb{0:14267} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/2272409054,v1:192.168.122.100:6815/2272409054] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Nov 24 18:21:53 compute-0 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb Updating MDS map to version 4 from mon.0
Nov 24 18:21:53 compute-0 ceph-mds[101380]: mds.0.3 handle_mds_map i am now mds.0.3
Nov 24 18:21:53 compute-0 ceph-mds[101380]: mds.0.3 handle_mds_map state change up:creating --> up:active
Nov 24 18:21:53 compute-0 ceph-mds[101380]: mds.0.3 recovery_done -- successful recovery!
Nov 24 18:21:53 compute-0 ceph-mds[101380]: mds.0.3 active_start
Nov 24 18:21:53 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2272409054,v1:192.168.122.100:6815/2272409054] up:active
Nov 24 18:21:53 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.apnhwb=up:active}
Nov 24 18:21:53 compute-0 sudo[101548]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:53 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:21:53 compute-0 podman[101847]: 2025-11-24 18:21:53.522853396 +0000 UTC m=+0.040488316 container create f30730c0037c3adb5ceedca69430eeb296ee8829772df05b8a69a655d516061d (image=quay.io/ceph/ceph:v18, name=relaxed_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 18:21:53 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:53 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:21:53 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:53 compute-0 systemd[1]: Started libpod-conmon-f30730c0037c3adb5ceedca69430eeb296ee8829772df05b8a69a655d516061d.scope.
Nov 24 18:21:53 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:53 compute-0 sudo[101865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa460109a9715a68b41f88925b042334a341d03b970933786e6bd688b1a16f6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa460109a9715a68b41f88925b042334a341d03b970933786e6bd688b1a16f6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:53 compute-0 sudo[101865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:53 compute-0 sudo[101865]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:53 compute-0 podman[101847]: 2025-11-24 18:21:53.590415143 +0000 UTC m=+0.108050083 container init f30730c0037c3adb5ceedca69430eeb296ee8829772df05b8a69a655d516061d (image=quay.io/ceph/ceph:v18, name=relaxed_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 18:21:53 compute-0 podman[101847]: 2025-11-24 18:21:53.596277689 +0000 UTC m=+0.113912609 container start f30730c0037c3adb5ceedca69430eeb296ee8829772df05b8a69a655d516061d (image=quay.io/ceph/ceph:v18, name=relaxed_carson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:21:53 compute-0 podman[101847]: 2025-11-24 18:21:53.599300874 +0000 UTC m=+0.116935824 container attach f30730c0037c3adb5ceedca69430eeb296ee8829772df05b8a69a655d516061d (image=quay.io/ceph/ceph:v18, name=relaxed_carson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:21:53 compute-0 podman[101847]: 2025-11-24 18:21:53.505276639 +0000 UTC m=+0.022911579 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:21:53 compute-0 sudo[101895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:21:53 compute-0 sudo[101895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:53 compute-0 sudo[101895]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:53 compute-0 sudo[101921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:53 compute-0 sudo[101921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:53 compute-0 sudo[101921]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:53 compute-0 sudo[101946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:21:53 compute-0 sudo[101946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:53 compute-0 ceph-mon[74927]: from='client.14269 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 18:21:53 compute-0 ceph-mon[74927]: pgmap v108: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:21:53 compute-0 ceph-mon[74927]: 4.c scrub starts
Nov 24 18:21:53 compute-0 ceph-mon[74927]: 4.c scrub ok
Nov 24 18:21:53 compute-0 ceph-mon[74927]: osdmap e49: 3 total, 3 up, 3 in
Nov 24 18:21:53 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2803639548' entity='client.rgw.rgw.compute-0.pecquu' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 24 18:21:53 compute-0 ceph-mon[74927]: mds.? [v2:192.168.122.100:6814/2272409054,v1:192.168.122.100:6815/2272409054] up:active
Nov 24 18:21:53 compute-0 ceph-mon[74927]: fsmap cephfs:1 {0=cephfs.compute-0.apnhwb=up:active}
Nov 24 18:21:53 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:53 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:53 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Nov 24 18:21:53 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Nov 24 18:21:54 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14271 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 18:21:54 compute-0 relaxed_carson[101885]: 
Nov 24 18:21:54 compute-0 relaxed_carson[101885]: [{"container_id": "cd3250af4db7", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.38%", "created": "2025-11-24T18:20:03.041398Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-11-24T18:20:03.115124Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-24T18:21:53.514299Z", "memory_usage": 11628707, "ports": [], "service_name": "crash", "started": "2025-11-24T18:20:02.696421Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d@crash.compute-0", "version": "18.2.7"}, {"container_id": "f8af585414f5", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "11.80%", "created": "2025-11-24T18:21:52.063154Z", "daemon_id": "cephfs.compute-0.apnhwb", "daemon_name": "mds.cephfs.compute-0.apnhwb", "daemon_type": "mds", "events": ["2025-11-24T18:21:52.111991Z daemon:mds.cephfs.compute-0.apnhwb [INFO] \"Deployed mds.cephfs.compute-0.apnhwb on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-24T18:21:53.514588Z", 
"memory_usage": 18171822, "ports": [], "service_name": "mds.cephfs", "started": "2025-11-24T18:21:51.951844Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d@mds.cephfs.compute-0.apnhwb", "version": "18.2.7"}, {"container_id": "9eef9f776910", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "26.19%", "created": "2025-11-24T18:18:49.722543Z", "daemon_id": "compute-0.dfqptp", "daemon_name": "mgr.compute-0.dfqptp", "daemon_type": "mgr", "events": ["2025-11-24T18:20:08.021719Z daemon:mgr.compute-0.dfqptp [INFO] \"Reconfigured mgr.compute-0.dfqptp on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-24T18:21:53.514239Z", "memory_usage": 552180121, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-11-24T18:18:49.616364Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d@mgr.compute-0.dfqptp", "version": "18.2.7"}, {"container_id": "6770cfc50a03", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "2.07%", "created": "2025-11-24T18:18:44.899424Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-11-24T18:20:07.331253Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": 
false, "last_refresh": "2025-11-24T18:21:53.514163Z", "memory_request": 2147483648, "memory_usage": 40076574, "ports": [], "service_name": "mon", "started": "2025-11-24T18:18:47.332196Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d@mon.compute-0", "version": "18.2.7"}, {"container_id": "9c8b4f7ebd62", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.50%", "created": "2025-11-24T18:20:32.372150Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-11-24T18:20:32.438644Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-24T18:21:53.514358Z", "memory_request": 4294967296, "memory_usage": 66542632, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-24T18:20:32.244328Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d@osd.0", "version": "18.2.7"}, {"container_id": "edbd9c794ff6", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.58%", "created": "2025-11-24T18:20:38.427303Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": 
["2025-11-24T18:20:39.747438Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-24T18:21:53.514419Z", "memory_request": 4294967296, "memory_usage": 67119349, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-24T18:20:37.844555Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d@osd.1", "version": "18.2.7"}, {"container_id": "d4b4bd73407e", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.69%", "created": "2025-11-24T18:20:46.644510Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-11-24T18:20:46.798149Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-24T18:21:53.514474Z", "memory_request": 4294967296, "memory_usage": 65840087, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-24T18:20:45.786566Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d@osd.2", "version": "18.2.7"}, {"container_id": "1905e5a7aafe", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", 
"cpu_percentage": "2.76%", "created": "2025-11-24T18:21:50.280110Z", "daemon_id": "rgw.compute-0.pecquu", "daemon_name": "rgw.rgw.compute-0.pecquu", "daemon_type": "rgw", "events": ["2025-11-24T18:21:50.338453Z daemon:rgw.rgw.compute-0.pecquu [INFO] \"Deployed rgw.rgw.compute-0.pecquu on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "last_refresh": "2025-11-24T18:21:53.514531Z", "memory_usage": 18738053, "ports": [8082], "service_name": "rgw.rgw", "started": "2025-11-24T18:21:50.183076Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d@rgw.rgw.compute-0.pecquu", "version": "18.2.7"}]
Nov 24 18:21:54 compute-0 systemd[1]: libpod-f30730c0037c3adb5ceedca69430eeb296ee8829772df05b8a69a655d516061d.scope: Deactivated successfully.
Nov 24 18:21:54 compute-0 podman[101847]: 2025-11-24 18:21:54.14683859 +0000 UTC m=+0.664473550 container died f30730c0037c3adb5ceedca69430eeb296ee8829772df05b8a69a655d516061d (image=quay.io/ceph/ceph:v18, name=relaxed_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:21:54 compute-0 sudo[101946]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfa460109a9715a68b41f88925b042334a341d03b970933786e6bd688b1a16f6-merged.mount: Deactivated successfully.
Nov 24 18:21:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:21:54 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:21:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:21:54 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:21:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:21:54 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:54 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev ab8de85c-1473-4826-be33-9c2210e6893e does not exist
Nov 24 18:21:54 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev d6a4a54f-8841-42bb-a35c-2632edc0bc9c does not exist
Nov 24 18:21:54 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 94d59412-9245-4efe-a215-68e269c1efd3 does not exist
Nov 24 18:21:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:21:54 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:21:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:21:54 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:21:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:21:54 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:21:54 compute-0 podman[101847]: 2025-11-24 18:21:54.20322765 +0000 UTC m=+0.720862600 container remove f30730c0037c3adb5ceedca69430eeb296ee8829772df05b8a69a655d516061d (image=quay.io/ceph/ceph:v18, name=relaxed_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 18:21:54 compute-0 systemd[1]: libpod-conmon-f30730c0037c3adb5ceedca69430eeb296ee8829772df05b8a69a655d516061d.scope: Deactivated successfully.
Nov 24 18:21:54 compute-0 sudo[101803]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:54 compute-0 sudo[102036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:54 compute-0 sudo[102036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:54 compute-0 sudo[102036]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:54 compute-0 sudo[102061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:21:54 compute-0 sudo[102061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:54 compute-0 sudo[102061]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Nov 24 18:21:54 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2803639548' entity='client.rgw.rgw.compute-0.pecquu' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 24 18:21:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Nov 24 18:21:54 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Nov 24 18:21:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 50 pg[9.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:54 compute-0 sudo[102086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:54 compute-0 sudo[102086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:54 compute-0 sudo[102086]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:54 compute-0 rsyslogd[1008]: message too long (8589) with configured size 8096, begin of message is: [{"container_id": "cd3250af4db7", "container_image_digests": ["quay.io/ceph/ceph [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 24 18:21:54 compute-0 sudo[102113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:21:54 compute-0 sudo[102113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:54 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v111: 195 pgs: 195 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Nov 24 18:21:54 compute-0 podman[102175]: 2025-11-24 18:21:54.721485899 +0000 UTC m=+0.058802541 container create 059167fdcd3f248af4334b4bf7c02b4c94713b63bdba1139ff618e6a71ffb31c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hofstadter, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 18:21:54 compute-0 ceph-mgr[75218]: [progress INFO root] Writing back 11 completed events
Nov 24 18:21:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 24 18:21:54 compute-0 systemd[1]: Started libpod-conmon-059167fdcd3f248af4334b4bf7c02b4c94713b63bdba1139ff618e6a71ffb31c.scope.
Nov 24 18:21:54 compute-0 podman[102175]: 2025-11-24 18:21:54.683165207 +0000 UTC m=+0.020481869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:21:54 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:54 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:54 compute-0 ceph-mon[74927]: 2.1 scrub starts
Nov 24 18:21:54 compute-0 ceph-mon[74927]: 2.1 scrub ok
Nov 24 18:21:54 compute-0 ceph-mon[74927]: 6.3 scrub starts
Nov 24 18:21:54 compute-0 ceph-mon[74927]: 6.3 scrub ok
Nov 24 18:21:54 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:21:54 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:21:54 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:54 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:21:54 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:21:54 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:21:54 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2803639548' entity='client.rgw.rgw.compute-0.pecquu' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 24 18:21:54 compute-0 ceph-mon[74927]: osdmap e50: 3 total, 3 up, 3 in
Nov 24 18:21:54 compute-0 podman[102175]: 2025-11-24 18:21:54.837086449 +0000 UTC m=+0.174403111 container init 059167fdcd3f248af4334b4bf7c02b4c94713b63bdba1139ff618e6a71ffb31c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hofstadter, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:21:54 compute-0 podman[102175]: 2025-11-24 18:21:54.844863422 +0000 UTC m=+0.182180064 container start 059167fdcd3f248af4334b4bf7c02b4c94713b63bdba1139ff618e6a71ffb31c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hofstadter, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:21:54 compute-0 condescending_hofstadter[102191]: 167 167
Nov 24 18:21:54 compute-0 systemd[1]: libpod-059167fdcd3f248af4334b4bf7c02b4c94713b63bdba1139ff618e6a71ffb31c.scope: Deactivated successfully.
Nov 24 18:21:54 compute-0 podman[102175]: 2025-11-24 18:21:54.853885656 +0000 UTC m=+0.191202318 container attach 059167fdcd3f248af4334b4bf7c02b4c94713b63bdba1139ff618e6a71ffb31c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hofstadter, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 18:21:54 compute-0 podman[102175]: 2025-11-24 18:21:54.854210304 +0000 UTC m=+0.191526946 container died 059167fdcd3f248af4334b4bf7c02b4c94713b63bdba1139ff618e6a71ffb31c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:21:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cb39837a79624d44159fb23d6647fd1301cfb28bdd11af6c554bbaeada41ad5-merged.mount: Deactivated successfully.
Nov 24 18:21:54 compute-0 podman[102175]: 2025-11-24 18:21:54.94224407 +0000 UTC m=+0.279560712 container remove 059167fdcd3f248af4334b4bf7c02b4c94713b63bdba1139ff618e6a71ffb31c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 18:21:54 compute-0 systemd[1]: libpod-conmon-059167fdcd3f248af4334b4bf7c02b4c94713b63bdba1139ff618e6a71ffb31c.scope: Deactivated successfully.
Nov 24 18:21:55 compute-0 sudo[102233]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nftayviimwqmtuxgquiktoruthypgmqy ; /usr/bin/python3'
Nov 24 18:21:55 compute-0 sudo[102233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:55 compute-0 podman[102241]: 2025-11-24 18:21:55.120466966 +0000 UTC m=+0.073682751 container create 660b87947cca8b3461e548ffe674be0aeee87bbf8056948cacfa63cd6b09243b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hofstadter, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:21:55 compute-0 python3[102235]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:21:55 compute-0 podman[102241]: 2025-11-24 18:21:55.068442314 +0000 UTC m=+0.021658109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:21:55 compute-0 systemd[1]: Started libpod-conmon-660b87947cca8b3461e548ffe674be0aeee87bbf8056948cacfa63cd6b09243b.scope.
Nov 24 18:21:55 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c66c4ba95083a64565bd15fbc98703382bf9b55ae59f1a01b9fe99a0a5612fb2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c66c4ba95083a64565bd15fbc98703382bf9b55ae59f1a01b9fe99a0a5612fb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c66c4ba95083a64565bd15fbc98703382bf9b55ae59f1a01b9fe99a0a5612fb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c66c4ba95083a64565bd15fbc98703382bf9b55ae59f1a01b9fe99a0a5612fb2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c66c4ba95083a64565bd15fbc98703382bf9b55ae59f1a01b9fe99a0a5612fb2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:55 compute-0 podman[102257]: 2025-11-24 18:21:55.264305308 +0000 UTC m=+0.093576025 container create 188252963ccb36811691f49fa52efa99e8e45df6de4228a21d202be8b0c886c3 (image=quay.io/ceph/ceph:v18, name=adoring_leakey, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 24 18:21:55 compute-0 podman[102257]: 2025-11-24 18:21:55.19149841 +0000 UTC m=+0.020769117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:21:55 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Nov 24 18:21:55 compute-0 systemd[1]: Started libpod-conmon-188252963ccb36811691f49fa52efa99e8e45df6de4228a21d202be8b0c886c3.scope.
Nov 24 18:21:55 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73812ab7f05a343958358db0b9c5f54871acb45aca0bfbe3a21882006495b20f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73812ab7f05a343958358db0b9c5f54871acb45aca0bfbe3a21882006495b20f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:55 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.d scrub starts
Nov 24 18:21:55 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.d scrub ok
Nov 24 18:21:55 compute-0 podman[102241]: 2025-11-24 18:21:55.45850878 +0000 UTC m=+0.411724605 container init 660b87947cca8b3461e548ffe674be0aeee87bbf8056948cacfa63cd6b09243b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hofstadter, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:21:55 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Nov 24 18:21:55 compute-0 podman[102241]: 2025-11-24 18:21:55.46538254 +0000 UTC m=+0.418598315 container start 660b87947cca8b3461e548ffe674be0aeee87bbf8056948cacfa63cd6b09243b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hofstadter, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 18:21:55 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Nov 24 18:21:55 compute-0 podman[102241]: 2025-11-24 18:21:55.523438632 +0000 UTC m=+0.476654417 container attach 660b87947cca8b3461e548ffe674be0aeee87bbf8056948cacfa63cd6b09243b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hofstadter, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:21:55 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 24 18:21:55 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2803639548' entity='client.rgw.rgw.compute-0.pecquu' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 24 18:21:55 compute-0 podman[102257]: 2025-11-24 18:21:55.716702481 +0000 UTC m=+0.545973198 container init 188252963ccb36811691f49fa52efa99e8e45df6de4228a21d202be8b0c886c3 (image=quay.io/ceph/ceph:v18, name=adoring_leakey, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:21:55 compute-0 podman[102257]: 2025-11-24 18:21:55.725436398 +0000 UTC m=+0.554707115 container start 188252963ccb36811691f49fa52efa99e8e45df6de4228a21d202be8b0c886c3 (image=quay.io/ceph/ceph:v18, name=adoring_leakey, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 18:21:55 compute-0 podman[102257]: 2025-11-24 18:21:55.736590735 +0000 UTC m=+0.565861462 container attach 188252963ccb36811691f49fa52efa99e8e45df6de4228a21d202be8b0c886c3 (image=quay.io/ceph/ceph:v18, name=adoring_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 18:21:55 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Nov 24 18:21:55 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Nov 24 18:21:55 compute-0 ceph-mon[74927]: from='client.14271 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 18:21:55 compute-0 ceph-mon[74927]: pgmap v111: 195 pgs: 195 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Nov 24 18:21:55 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:21:55 compute-0 ceph-mon[74927]: osdmap e51: 3 total, 3 up, 3 in
Nov 24 18:21:55 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2803639548' entity='client.rgw.rgw.compute-0.pecquu' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 24 18:21:56 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.c deep-scrub starts
Nov 24 18:21:56 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 51 pg[10.0( empty local-lis/les=0/0 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [2] r=0 lpr=51 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:56 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.c deep-scrub ok
Nov 24 18:21:56 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 24 18:21:56 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3758805489' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 18:21:56 compute-0 adoring_leakey[102275]: 
Nov 24 18:21:56 compute-0 adoring_leakey[102275]: {"fsid":"e5ee928f-099b-569b-93c9-ecf025cbb50d","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":188,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":51,"num_osds":3,"num_up_osds":3,"osd_up_since":1764008452,"num_in_osds":3,"osd_in_since":1764008421,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":195}],"num_pgs":195,"num_pools":9,"num_objects":27,"data_bytes":463028,"bytes_used":84414464,"bytes_avail":64327512064,"bytes_total":64411926528,"read_bytes_sec":1023,"write_bytes_sec":4606,"read_op_per_sec":0,"write_op_per_sec":11},"fsmap":{"epoch":4,"id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.apnhwb","status":"up:active","gid":14267}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":5,"modified":"2025-11-24T18:21:54.483217+0000","services":{"mds":{"daemons":{"summary":"","cephfs.compute-0.apnhwb":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Nov 24 18:21:56 compute-0 systemd[1]: libpod-188252963ccb36811691f49fa52efa99e8e45df6de4228a21d202be8b0c886c3.scope: Deactivated successfully.
Nov 24 18:21:56 compute-0 podman[102257]: 2025-11-24 18:21:56.322716619 +0000 UTC m=+1.151987306 container died 188252963ccb36811691f49fa52efa99e8e45df6de4228a21d202be8b0c886c3 (image=quay.io/ceph/ceph:v18, name=adoring_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 18:21:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-73812ab7f05a343958358db0b9c5f54871acb45aca0bfbe3a21882006495b20f-merged.mount: Deactivated successfully.
Nov 24 18:21:56 compute-0 podman[102257]: 2025-11-24 18:21:56.363666405 +0000 UTC m=+1.192937092 container remove 188252963ccb36811691f49fa52efa99e8e45df6de4228a21d202be8b0c886c3 (image=quay.io/ceph/ceph:v18, name=adoring_leakey, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:21:56 compute-0 systemd[1]: libpod-conmon-188252963ccb36811691f49fa52efa99e8e45df6de4228a21d202be8b0c886c3.scope: Deactivated successfully.
Nov 24 18:21:56 compute-0 sudo[102233]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:56 compute-0 sweet_hofstadter[102263]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:21:56 compute-0 sweet_hofstadter[102263]: --> relative data size: 1.0
Nov 24 18:21:56 compute-0 sweet_hofstadter[102263]: --> All data devices are unavailable
Nov 24 18:21:56 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Nov 24 18:21:56 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2803639548' entity='client.rgw.rgw.compute-0.pecquu' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 24 18:21:56 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Nov 24 18:21:56 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Nov 24 18:21:56 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 52 pg[10.0( empty local-lis/les=51/52 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [2] r=0 lpr=51 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:56 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v114: 196 pgs: 1 unknown, 195 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Nov 24 18:21:56 compute-0 systemd[1]: libpod-660b87947cca8b3461e548ffe674be0aeee87bbf8056948cacfa63cd6b09243b.scope: Deactivated successfully.
Nov 24 18:21:56 compute-0 podman[102337]: 2025-11-24 18:21:56.565746083 +0000 UTC m=+0.048690280 container died 660b87947cca8b3461e548ffe674be0aeee87bbf8056948cacfa63cd6b09243b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 18:21:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-c66c4ba95083a64565bd15fbc98703382bf9b55ae59f1a01b9fe99a0a5612fb2-merged.mount: Deactivated successfully.
Nov 24 18:21:56 compute-0 podman[102337]: 2025-11-24 18:21:56.624752738 +0000 UTC m=+0.107696925 container remove 660b87947cca8b3461e548ffe674be0aeee87bbf8056948cacfa63cd6b09243b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hofstadter, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:21:56 compute-0 systemd[1]: libpod-conmon-660b87947cca8b3461e548ffe674be0aeee87bbf8056948cacfa63cd6b09243b.scope: Deactivated successfully.
Nov 24 18:21:56 compute-0 sudo[102113]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:56 compute-0 sudo[102365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:56 compute-0 sudo[102365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:56 compute-0 sudo[102365]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:56 compute-0 sudo[102390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:21:56 compute-0 sudo[102390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:56 compute-0 sudo[102390]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:56 compute-0 sudo[102415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:56 compute-0 sudo[102415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:56 compute-0 sudo[102415]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:56 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Nov 24 18:21:56 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Nov 24 18:21:56 compute-0 sudo[102440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:21:56 compute-0 sudo[102440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:57 compute-0 ceph-mon[74927]: 3.d scrub starts
Nov 24 18:21:57 compute-0 ceph-mon[74927]: 3.d scrub ok
Nov 24 18:21:57 compute-0 ceph-mon[74927]: 6.5 scrub starts
Nov 24 18:21:57 compute-0 ceph-mon[74927]: 6.5 scrub ok
Nov 24 18:21:57 compute-0 ceph-mon[74927]: 2.c deep-scrub starts
Nov 24 18:21:57 compute-0 ceph-mon[74927]: 2.c deep-scrub ok
Nov 24 18:21:57 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3758805489' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 18:21:57 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2803639548' entity='client.rgw.rgw.compute-0.pecquu' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 24 18:21:57 compute-0 ceph-mon[74927]: osdmap e52: 3 total, 3 up, 3 in
Nov 24 18:21:57 compute-0 ceph-mon[74927]: 4.15 scrub starts
Nov 24 18:21:57 compute-0 ceph-mon[74927]: 4.15 scrub ok
Nov 24 18:21:57 compute-0 sudo[102502]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bytwwsgzxeojxxzcwaphrxpcsgirsnoo ; /usr/bin/python3'
Nov 24 18:21:57 compute-0 sudo[102502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:57 compute-0 python3[102509]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:21:57 compute-0 podman[102530]: 2025-11-24 18:21:57.334226214 +0000 UTC m=+0.040576018 container create 299071e2f5701828bbe05aa6e073481195e99937a3a999aa0f80aa421f76cc4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 18:21:57 compute-0 systemd[1]: Started libpod-conmon-299071e2f5701828bbe05aa6e073481195e99937a3a999aa0f80aa421f76cc4b.scope.
Nov 24 18:21:57 compute-0 podman[102542]: 2025-11-24 18:21:57.370494465 +0000 UTC m=+0.042447965 container create fc703ed1bb1c5b511eb5a508be59b532653e01f3c79fc5f4ec79e19506d9d5b9 (image=quay.io/ceph/ceph:v18, name=vigilant_edison, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 24 18:21:57 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:57 compute-0 systemd[1]: Started libpod-conmon-fc703ed1bb1c5b511eb5a508be59b532653e01f3c79fc5f4ec79e19506d9d5b9.scope.
Nov 24 18:21:57 compute-0 podman[102530]: 2025-11-24 18:21:57.411237936 +0000 UTC m=+0.117587780 container init 299071e2f5701828bbe05aa6e073481195e99937a3a999aa0f80aa421f76cc4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mirzakhani, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:21:57 compute-0 podman[102530]: 2025-11-24 18:21:57.316389461 +0000 UTC m=+0.022739285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:21:57 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Nov 24 18:21:57 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:57 compute-0 podman[102530]: 2025-11-24 18:21:57.421912301 +0000 UTC m=+0.128262105 container start 299071e2f5701828bbe05aa6e073481195e99937a3a999aa0f80aa421f76cc4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 18:21:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f497881b2e35bef1de4bf4de0a56e9dd15fbbd4bc60bf74ffeb6e968bcc41991/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f497881b2e35bef1de4bf4de0a56e9dd15fbbd4bc60bf74ffeb6e968bcc41991/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:57 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Nov 24 18:21:57 compute-0 podman[102530]: 2025-11-24 18:21:57.425960902 +0000 UTC m=+0.132310736 container attach 299071e2f5701828bbe05aa6e073481195e99937a3a999aa0f80aa421f76cc4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 18:21:57 compute-0 sad_mirzakhani[102559]: 167 167
Nov 24 18:21:57 compute-0 systemd[1]: libpod-299071e2f5701828bbe05aa6e073481195e99937a3a999aa0f80aa421f76cc4b.scope: Deactivated successfully.
Nov 24 18:21:57 compute-0 podman[102542]: 2025-11-24 18:21:57.436512274 +0000 UTC m=+0.108465774 container init fc703ed1bb1c5b511eb5a508be59b532653e01f3c79fc5f4ec79e19506d9d5b9 (image=quay.io/ceph/ceph:v18, name=vigilant_edison, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:21:57 compute-0 podman[102530]: 2025-11-24 18:21:57.441128349 +0000 UTC m=+0.147478153 container died 299071e2f5701828bbe05aa6e073481195e99937a3a999aa0f80aa421f76cc4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mirzakhani, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 24 18:21:57 compute-0 podman[102542]: 2025-11-24 18:21:57.4420086 +0000 UTC m=+0.113962100 container start fc703ed1bb1c5b511eb5a508be59b532653e01f3c79fc5f4ec79e19506d9d5b9 (image=quay.io/ceph/ceph:v18, name=vigilant_edison, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:21:57 compute-0 podman[102542]: 2025-11-24 18:21:57.354224391 +0000 UTC m=+0.026177911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:21:57 compute-0 podman[102542]: 2025-11-24 18:21:57.452671435 +0000 UTC m=+0.124624965 container attach fc703ed1bb1c5b511eb5a508be59b532653e01f3c79fc5f4ec79e19506d9d5b9 (image=quay.io/ceph/ceph:v18, name=vigilant_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 18:21:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-c47f54ea5fcd053f91b45d9534804e811648e20ece17d79909d8e599009b80c5-merged.mount: Deactivated successfully.
Nov 24 18:21:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Nov 24 18:21:57 compute-0 podman[102530]: 2025-11-24 18:21:57.474887047 +0000 UTC m=+0.181236851 container remove 299071e2f5701828bbe05aa6e073481195e99937a3a999aa0f80aa421f76cc4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mirzakhani, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:21:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Nov 24 18:21:57 compute-0 systemd[1]: libpod-conmon-299071e2f5701828bbe05aa6e073481195e99937a3a999aa0f80aa421f76cc4b.scope: Deactivated successfully.
Nov 24 18:21:57 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Nov 24 18:21:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:21:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 24 18:21:57 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2409275648' entity='client.rgw.rgw.compute-0.pecquu' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 24 18:21:57 compute-0 podman[102589]: 2025-11-24 18:21:57.630200603 +0000 UTC m=+0.023013042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:21:58 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 24 18:21:58 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1538751053' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 18:21:58 compute-0 podman[102589]: 2025-11-24 18:21:58.384692828 +0000 UTC m=+0.777505277 container create 2f3951d4b7eef42ae8bbf5155d1bdee7b93b1ddd4187151e9ec1982071b3ecc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_vaughan, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:21:58 compute-0 ceph-mon[74927]: pgmap v114: 196 pgs: 1 unknown, 195 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Nov 24 18:21:58 compute-0 ceph-mon[74927]: 3.10 scrub starts
Nov 24 18:21:58 compute-0 ceph-mon[74927]: osdmap e53: 3 total, 3 up, 3 in
Nov 24 18:21:58 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2409275648' entity='client.rgw.rgw.compute-0.pecquu' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 24 18:21:58 compute-0 vigilant_edison[102564]: 
Nov 24 18:21:58 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 53 pg[11.0( empty local-lis/les=0/0 n=0 ec=53/53 lis/c=0/0 les/c/f=0/0/0 sis=53) [1] r=0 lpr=53 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:21:58 compute-0 vigilant_edison[102564]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","
can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.pecquu","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Nov 24 18:21:58 compute-0 systemd[1]: libpod-fc703ed1bb1c5b511eb5a508be59b532653e01f3c79fc5f4ec79e19506d9d5b9.scope: Deactivated successfully.
Nov 24 18:21:58 compute-0 podman[102542]: 2025-11-24 18:21:58.421853121 +0000 UTC m=+1.093806621 container died fc703ed1bb1c5b511eb5a508be59b532653e01f3c79fc5f4ec79e19506d9d5b9 (image=quay.io/ceph/ceph:v18, name=vigilant_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:21:58 compute-0 systemd[1]: Started libpod-conmon-2f3951d4b7eef42ae8bbf5155d1bdee7b93b1ddd4187151e9ec1982071b3ecc4.scope.
Nov 24 18:21:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-f497881b2e35bef1de4bf4de0a56e9dd15fbbd4bc60bf74ffeb6e968bcc41991-merged.mount: Deactivated successfully.
Nov 24 18:21:58 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98dce4562069d5590fcbbcfbd9351416220843b176c85e16102934f8844f6929/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98dce4562069d5590fcbbcfbd9351416220843b176c85e16102934f8844f6929/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98dce4562069d5590fcbbcfbd9351416220843b176c85e16102934f8844f6929/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98dce4562069d5590fcbbcfbd9351416220843b176c85e16102934f8844f6929/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:58 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Nov 24 18:21:58 compute-0 podman[102542]: 2025-11-24 18:21:58.479757438 +0000 UTC m=+1.151710938 container remove fc703ed1bb1c5b511eb5a508be59b532653e01f3c79fc5f4ec79e19506d9d5b9 (image=quay.io/ceph/ceph:v18, name=vigilant_edison, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 18:21:58 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v116: 197 pgs: 1 unknown, 196 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 988 B/s rd, 1.2 KiB/s wr, 4 op/s
Nov 24 18:21:58 compute-0 systemd[1]: libpod-conmon-fc703ed1bb1c5b511eb5a508be59b532653e01f3c79fc5f4ec79e19506d9d5b9.scope: Deactivated successfully.
Nov 24 18:21:58 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2409275648' entity='client.rgw.rgw.compute-0.pecquu' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 24 18:21:58 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Nov 24 18:21:58 compute-0 podman[102589]: 2025-11-24 18:21:58.496304099 +0000 UTC m=+0.889116538 container init 2f3951d4b7eef42ae8bbf5155d1bdee7b93b1ddd4187151e9ec1982071b3ecc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:21:58 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Nov 24 18:21:58 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 24 18:21:58 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2409275648' entity='client.rgw.rgw.compute-0.pecquu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 24 18:21:58 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 54 pg[11.0( empty local-lis/les=53/54 n=0 ec=53/53 lis/c=0/0 les/c/f=0/0/0 sis=53) [1] r=0 lpr=53 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:21:58 compute-0 podman[102589]: 2025-11-24 18:21:58.511990639 +0000 UTC m=+0.904803058 container start 2f3951d4b7eef42ae8bbf5155d1bdee7b93b1ddd4187151e9ec1982071b3ecc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_vaughan, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 18:21:58 compute-0 sudo[102502]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:58 compute-0 podman[102589]: 2025-11-24 18:21:58.519648749 +0000 UTC m=+0.912461178 container attach 2f3951d4b7eef42ae8bbf5155d1bdee7b93b1ddd4187151e9ec1982071b3ecc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]: {
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:     "0": [
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:         {
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "devices": [
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "/dev/loop3"
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             ],
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "lv_name": "ceph_lv0",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "lv_size": "21470642176",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "name": "ceph_lv0",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "tags": {
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.cluster_name": "ceph",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.crush_device_class": "",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.encrypted": "0",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.osd_id": "0",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.type": "block",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.vdo": "0"
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             },
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "type": "block",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "vg_name": "ceph_vg0"
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:         }
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:     ],
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:     "1": [
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:         {
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "devices": [
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "/dev/loop4"
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             ],
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "lv_name": "ceph_lv1",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "lv_size": "21470642176",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "name": "ceph_lv1",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "tags": {
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.cluster_name": "ceph",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.crush_device_class": "",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.encrypted": "0",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.osd_id": "1",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.type": "block",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.vdo": "0"
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             },
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "type": "block",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "vg_name": "ceph_vg1"
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:         }
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:     ],
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:     "2": [
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:         {
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "devices": [
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "/dev/loop5"
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             ],
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "lv_name": "ceph_lv2",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "lv_size": "21470642176",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "name": "ceph_lv2",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "tags": {
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.cluster_name": "ceph",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.crush_device_class": "",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.encrypted": "0",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.osd_id": "2",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.type": "block",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:                 "ceph.vdo": "0"
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             },
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "type": "block",
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:             "vg_name": "ceph_vg2"
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:         }
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]:     ]
Nov 24 18:21:59 compute-0 elegant_vaughan[102633]: }
Nov 24 18:21:59 compute-0 systemd[1]: libpod-2f3951d4b7eef42ae8bbf5155d1bdee7b93b1ddd4187151e9ec1982071b3ecc4.scope: Deactivated successfully.
Nov 24 18:21:59 compute-0 podman[102589]: 2025-11-24 18:21:59.270798611 +0000 UTC m=+1.663611040 container died 2f3951d4b7eef42ae8bbf5155d1bdee7b93b1ddd4187151e9ec1982071b3ecc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:21:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-98dce4562069d5590fcbbcfbd9351416220843b176c85e16102934f8844f6929-merged.mount: Deactivated successfully.
Nov 24 18:21:59 compute-0 podman[102589]: 2025-11-24 18:21:59.339536467 +0000 UTC m=+1.732348886 container remove 2f3951d4b7eef42ae8bbf5155d1bdee7b93b1ddd4187151e9ec1982071b3ecc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_vaughan, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 24 18:21:59 compute-0 systemd[1]: libpod-conmon-2f3951d4b7eef42ae8bbf5155d1bdee7b93b1ddd4187151e9ec1982071b3ecc4.scope: Deactivated successfully.
Nov 24 18:21:59 compute-0 ceph-mon[74927]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 18:21:59 compute-0 ceph-mon[74927]: 3.10 scrub ok
Nov 24 18:21:59 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1538751053' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 18:21:59 compute-0 ceph-mon[74927]: pgmap v116: 197 pgs: 1 unknown, 196 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 988 B/s rd, 1.2 KiB/s wr, 4 op/s
Nov 24 18:21:59 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2409275648' entity='client.rgw.rgw.compute-0.pecquu' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 24 18:21:59 compute-0 ceph-mon[74927]: osdmap e54: 3 total, 3 up, 3 in
Nov 24 18:21:59 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2409275648' entity='client.rgw.rgw.compute-0.pecquu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 24 18:21:59 compute-0 sudo[102440]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:59 compute-0 sudo[102684]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhrtbihhwniznxavuixhcwrbcomrzqjp ; /usr/bin/python3'
Nov 24 18:21:59 compute-0 sudo[102684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:21:59 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Nov 24 18:21:59 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Nov 24 18:21:59 compute-0 sudo[102686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:59 compute-0 sudo[102686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:59 compute-0 sudo[102686]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:59 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Nov 24 18:21:59 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2409275648' entity='client.rgw.rgw.compute-0.pecquu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 24 18:21:59 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Nov 24 18:21:59 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Nov 24 18:21:59 compute-0 sudo[102712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:21:59 compute-0 sudo[102712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:59 compute-0 python3[102690]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:21:59 compute-0 sudo[102712]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:59 compute-0 radosgw[100923]: LDAP not started since no server URIs were provided in the configuration.
Nov 24 18:21:59 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-rgw-rgw-compute-0-pecquu[100919]: 2025-11-24T18:21:59.655+0000 7fbb4adbf940 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 24 18:21:59 compute-0 radosgw[100923]: framework: beast
Nov 24 18:21:59 compute-0 radosgw[100923]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 24 18:21:59 compute-0 radosgw[100923]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 24 18:21:59 compute-0 sudo[102738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:21:59 compute-0 sudo[102738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:59 compute-0 sudo[102738]: pam_unix(sudo:session): session closed for user root
Nov 24 18:21:59 compute-0 radosgw[100923]: starting handler: beast
Nov 24 18:21:59 compute-0 podman[102737]: 2025-11-24 18:21:59.696508151 +0000 UTC m=+0.080539041 container create 1d3cce47d84e6281963fac1a33402ea9fd15932628e93b76e701ff4699cd5354 (image=quay.io/ceph/ceph:v18, name=awesome_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 18:21:59 compute-0 radosgw[100923]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 18:21:59 compute-0 systemd[1]: Started libpod-conmon-1d3cce47d84e6281963fac1a33402ea9fd15932628e93b76e701ff4699cd5354.scope.
Nov 24 18:21:59 compute-0 radosgw[100923]: mgrc service_daemon_register rgw.14275 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.pecquu,kernel_description=#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025,kernel_version=5.14.0-639.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=397255ee-4d69-4fa3-899d-5bd80ba5189e,zone_name=default,zonegroup_id=d82e3822-fde1-4c19-b950-8e22988d5e44,zonegroup_name=default}
Nov 24 18:21:59 compute-0 sudo[102802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:21:59 compute-0 sudo[102802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:21:59 compute-0 podman[102737]: 2025-11-24 18:21:59.666400034 +0000 UTC m=+0.050430954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:21:59 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:21:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fc6e627d6ee8efa5c4c3ab9ae7b8b325aed826e4aa1daef8191402bb5015dd5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fc6e627d6ee8efa5c4c3ab9ae7b8b325aed826e4aa1daef8191402bb5015dd5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:21:59 compute-0 podman[102737]: 2025-11-24 18:21:59.783393119 +0000 UTC m=+0.167423989 container init 1d3cce47d84e6281963fac1a33402ea9fd15932628e93b76e701ff4699cd5354 (image=quay.io/ceph/ceph:v18, name=awesome_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 18:21:59 compute-0 podman[102737]: 2025-11-24 18:21:59.793372487 +0000 UTC m=+0.177403347 container start 1d3cce47d84e6281963fac1a33402ea9fd15932628e93b76e701ff4699cd5354 (image=quay.io/ceph/ceph:v18, name=awesome_beaver, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:21:59 compute-0 podman[102737]: 2025-11-24 18:21:59.798675788 +0000 UTC m=+0.182706638 container attach 1d3cce47d84e6281963fac1a33402ea9fd15932628e93b76e701ff4699cd5354 (image=quay.io/ceph/ceph:v18, name=awesome_beaver, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 18:22:00 compute-0 podman[103389]: 2025-11-24 18:22:00.08065388 +0000 UTC m=+0.058793121 container create 21dacaeabc5631810955a809d28a044549d0d667e16608595b2ebbeb49883404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_northcutt, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 18:22:00 compute-0 systemd[1]: Started libpod-conmon-21dacaeabc5631810955a809d28a044549d0d667e16608595b2ebbeb49883404.scope.
Nov 24 18:22:00 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:22:00 compute-0 podman[103389]: 2025-11-24 18:22:00.057265609 +0000 UTC m=+0.035404880 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:22:00 compute-0 podman[103389]: 2025-11-24 18:22:00.151569341 +0000 UTC m=+0.129708592 container init 21dacaeabc5631810955a809d28a044549d0d667e16608595b2ebbeb49883404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_northcutt, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 18:22:00 compute-0 podman[103389]: 2025-11-24 18:22:00.159081037 +0000 UTC m=+0.137220278 container start 21dacaeabc5631810955a809d28a044549d0d667e16608595b2ebbeb49883404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:22:00 compute-0 wonderful_northcutt[103406]: 167 167
Nov 24 18:22:00 compute-0 systemd[1]: libpod-21dacaeabc5631810955a809d28a044549d0d667e16608595b2ebbeb49883404.scope: Deactivated successfully.
Nov 24 18:22:00 compute-0 podman[103389]: 2025-11-24 18:22:00.164806259 +0000 UTC m=+0.142945530 container attach 21dacaeabc5631810955a809d28a044549d0d667e16608595b2ebbeb49883404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_northcutt, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:22:00 compute-0 conmon[103406]: conmon 21dacaeabc5631810955 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-21dacaeabc5631810955a809d28a044549d0d667e16608595b2ebbeb49883404.scope/container/memory.events
Nov 24 18:22:00 compute-0 podman[103389]: 2025-11-24 18:22:00.166765448 +0000 UTC m=+0.144904689 container died 21dacaeabc5631810955a809d28a044549d0d667e16608595b2ebbeb49883404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:22:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-983b8595d8b201846236fd34cb3baf4ce0c676e94bab630eea14316dc2998921-merged.mount: Deactivated successfully.
Nov 24 18:22:00 compute-0 podman[103389]: 2025-11-24 18:22:00.259807368 +0000 UTC m=+0.237946609 container remove 21dacaeabc5631810955a809d28a044549d0d667e16608595b2ebbeb49883404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:22:00 compute-0 systemd[1]: libpod-conmon-21dacaeabc5631810955a809d28a044549d0d667e16608595b2ebbeb49883404.scope: Deactivated successfully.
Nov 24 18:22:00 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Nov 24 18:22:00 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3379749381' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 24 18:22:00 compute-0 awesome_beaver[103340]: mimic
Nov 24 18:22:00 compute-0 ceph-mon[74927]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 18:22:00 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2409275648' entity='client.rgw.rgw.compute-0.pecquu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 24 18:22:00 compute-0 ceph-mon[74927]: osdmap e55: 3 total, 3 up, 3 in
Nov 24 18:22:00 compute-0 podman[102737]: 2025-11-24 18:22:00.419140514 +0000 UTC m=+0.803171364 container died 1d3cce47d84e6281963fac1a33402ea9fd15932628e93b76e701ff4699cd5354 (image=quay.io/ceph/ceph:v18, name=awesome_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:22:00 compute-0 systemd[1]: libpod-1d3cce47d84e6281963fac1a33402ea9fd15932628e93b76e701ff4699cd5354.scope: Deactivated successfully.
Nov 24 18:22:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fc6e627d6ee8efa5c4c3ab9ae7b8b325aed826e4aa1daef8191402bb5015dd5-merged.mount: Deactivated successfully.
Nov 24 18:22:00 compute-0 podman[102737]: 2025-11-24 18:22:00.472979901 +0000 UTC m=+0.857010751 container remove 1d3cce47d84e6281963fac1a33402ea9fd15932628e93b76e701ff4699cd5354 (image=quay.io/ceph/ceph:v18, name=awesome_beaver, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:22:00 compute-0 systemd[1]: libpod-conmon-1d3cce47d84e6281963fac1a33402ea9fd15932628e93b76e701ff4699cd5354.scope: Deactivated successfully.
Nov 24 18:22:00 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v119: 197 pgs: 197 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 510 B/s wr, 1 op/s
Nov 24 18:22:00 compute-0 sudo[102684]: pam_unix(sudo:session): session closed for user root
Nov 24 18:22:00 compute-0 podman[103448]: 2025-11-24 18:22:00.499421677 +0000 UTC m=+0.090062737 container create 6e467a9a41b424baa60a3fe58fd96eea300b4f362b2a92104d3e5b32aa9cdac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 18:22:00 compute-0 podman[103448]: 2025-11-24 18:22:00.440414852 +0000 UTC m=+0.031055902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:22:00 compute-0 systemd[1]: Started libpod-conmon-6e467a9a41b424baa60a3fe58fd96eea300b4f362b2a92104d3e5b32aa9cdac0.scope.
Nov 24 18:22:00 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:22:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee1ace7c24aff56fc21dcdd268b5464663a9a2978f9dbcfb256b713438dc2aac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:22:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee1ace7c24aff56fc21dcdd268b5464663a9a2978f9dbcfb256b713438dc2aac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:22:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee1ace7c24aff56fc21dcdd268b5464663a9a2978f9dbcfb256b713438dc2aac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:22:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee1ace7c24aff56fc21dcdd268b5464663a9a2978f9dbcfb256b713438dc2aac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:22:00 compute-0 podman[103448]: 2025-11-24 18:22:00.612245409 +0000 UTC m=+0.202886479 container init 6e467a9a41b424baa60a3fe58fd96eea300b4f362b2a92104d3e5b32aa9cdac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 18:22:00 compute-0 podman[103448]: 2025-11-24 18:22:00.620609436 +0000 UTC m=+0.211250486 container start 6e467a9a41b424baa60a3fe58fd96eea300b4f362b2a92104d3e5b32aa9cdac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:22:00 compute-0 podman[103448]: 2025-11-24 18:22:00.623288223 +0000 UTC m=+0.213929273 container attach 6e467a9a41b424baa60a3fe58fd96eea300b4f362b2a92104d3e5b32aa9cdac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mirzakhani, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 18:22:01 compute-0 sudo[103505]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azpguevwkxrzwbthvmidevyejpmrqfbr ; /usr/bin/python3'
Nov 24 18:22:01 compute-0 sudo[103505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:22:01 compute-0 ceph-mon[74927]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 24 18:22:01 compute-0 ceph-mon[74927]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 24 18:22:01 compute-0 ceph-mon[74927]: 3.13 scrub starts
Nov 24 18:22:01 compute-0 ceph-mon[74927]: 3.13 scrub ok
Nov 24 18:22:01 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3379749381' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 24 18:22:01 compute-0 ceph-mon[74927]: pgmap v119: 197 pgs: 197 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 510 B/s wr, 1 op/s
Nov 24 18:22:01 compute-0 python3[103509]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:22:01 compute-0 podman[103524]: 2025-11-24 18:22:01.470408407 +0000 UTC m=+0.036747073 container create 8e02ae2236d81cf5d059eed78f6de0a56027e5bfcdda44e561cf8228351791df (image=quay.io/ceph/ceph:v18, name=zealous_shaw, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:22:01 compute-0 systemd[1]: Started libpod-conmon-8e02ae2236d81cf5d059eed78f6de0a56027e5bfcdda44e561cf8228351791df.scope.
Nov 24 18:22:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:22:01 compute-0 admiring_mirzakhani[103477]: {
Nov 24 18:22:01 compute-0 admiring_mirzakhani[103477]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:22:01 compute-0 admiring_mirzakhani[103477]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:22:01 compute-0 admiring_mirzakhani[103477]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:22:01 compute-0 admiring_mirzakhani[103477]:         "osd_id": 0,
Nov 24 18:22:01 compute-0 admiring_mirzakhani[103477]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:22:01 compute-0 admiring_mirzakhani[103477]:         "type": "bluestore"
Nov 24 18:22:01 compute-0 admiring_mirzakhani[103477]:     },
Nov 24 18:22:01 compute-0 admiring_mirzakhani[103477]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:22:01 compute-0 admiring_mirzakhani[103477]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:22:01 compute-0 admiring_mirzakhani[103477]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:22:01 compute-0 admiring_mirzakhani[103477]:         "osd_id": 1,
Nov 24 18:22:01 compute-0 admiring_mirzakhani[103477]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:22:01 compute-0 admiring_mirzakhani[103477]:         "type": "bluestore"
Nov 24 18:22:01 compute-0 admiring_mirzakhani[103477]:     },
Nov 24 18:22:01 compute-0 admiring_mirzakhani[103477]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:22:01 compute-0 admiring_mirzakhani[103477]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:22:01 compute-0 admiring_mirzakhani[103477]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:22:01 compute-0 admiring_mirzakhani[103477]:         "osd_id": 2,
Nov 24 18:22:01 compute-0 admiring_mirzakhani[103477]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:22:01 compute-0 admiring_mirzakhani[103477]:         "type": "bluestore"
Nov 24 18:22:01 compute-0 admiring_mirzakhani[103477]:     }
Nov 24 18:22:01 compute-0 admiring_mirzakhani[103477]: }
Nov 24 18:22:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/196c9bd2700d9801eabe7f3578a0e9e8d6d30430dcec4e7e261fd7509ac18aa4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:22:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/196c9bd2700d9801eabe7f3578a0e9e8d6d30430dcec4e7e261fd7509ac18aa4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:22:01 compute-0 podman[103524]: 2025-11-24 18:22:01.546870176 +0000 UTC m=+0.113209122 container init 8e02ae2236d81cf5d059eed78f6de0a56027e5bfcdda44e561cf8228351791df (image=quay.io/ceph/ceph:v18, name=zealous_shaw, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:22:01 compute-0 podman[103524]: 2025-11-24 18:22:01.454689367 +0000 UTC m=+0.021028063 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:22:01 compute-0 podman[103524]: 2025-11-24 18:22:01.55225261 +0000 UTC m=+0.118591276 container start 8e02ae2236d81cf5d059eed78f6de0a56027e5bfcdda44e561cf8228351791df (image=quay.io/ceph/ceph:v18, name=zealous_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 18:22:01 compute-0 podman[103524]: 2025-11-24 18:22:01.555592593 +0000 UTC m=+0.121931289 container attach 8e02ae2236d81cf5d059eed78f6de0a56027e5bfcdda44e561cf8228351791df (image=quay.io/ceph/ceph:v18, name=zealous_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:22:01 compute-0 systemd[1]: libpod-6e467a9a41b424baa60a3fe58fd96eea300b4f362b2a92104d3e5b32aa9cdac0.scope: Deactivated successfully.
Nov 24 18:22:01 compute-0 podman[103448]: 2025-11-24 18:22:01.570549634 +0000 UTC m=+1.161190684 container died 6e467a9a41b424baa60a3fe58fd96eea300b4f362b2a92104d3e5b32aa9cdac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 18:22:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee1ace7c24aff56fc21dcdd268b5464663a9a2978f9dbcfb256b713438dc2aac-merged.mount: Deactivated successfully.
Nov 24 18:22:01 compute-0 podman[103448]: 2025-11-24 18:22:01.619421507 +0000 UTC m=+1.210062557 container remove 6e467a9a41b424baa60a3fe58fd96eea300b4f362b2a92104d3e5b32aa9cdac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mirzakhani, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 18:22:01 compute-0 systemd[1]: libpod-conmon-6e467a9a41b424baa60a3fe58fd96eea300b4f362b2a92104d3e5b32aa9cdac0.scope: Deactivated successfully.
Nov 24 18:22:01 compute-0 sudo[102802]: pam_unix(sudo:session): session closed for user root
Nov 24 18:22:01 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:22:01 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:22:01 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:22:01 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:22:01 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 24220f0a-b208-4327-bf21-5e2fd9de5ee2 does not exist
Nov 24 18:22:01 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 705d2717-c736-4aff-994c-68f975b5dab1 does not exist
Nov 24 18:22:01 compute-0 sudo[103569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:22:01 compute-0 sudo[103569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:22:01 compute-0 sudo[103569]: pam_unix(sudo:session): session closed for user root
Nov 24 18:22:01 compute-0 sudo[103594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:22:01 compute-0 sudo[103594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:22:01 compute-0 sudo[103594]: pam_unix(sudo:session): session closed for user root
Nov 24 18:22:01 compute-0 sudo[103619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:22:01 compute-0 sudo[103619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:22:01 compute-0 sudo[103619]: pam_unix(sudo:session): session closed for user root
Nov 24 18:22:01 compute-0 sudo[103644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:22:01 compute-0 sudo[103644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:22:01 compute-0 sudo[103644]: pam_unix(sudo:session): session closed for user root
Nov 24 18:22:01 compute-0 sudo[103688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:22:01 compute-0 sudo[103688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:22:01 compute-0 sudo[103688]: pam_unix(sudo:session): session closed for user root
Nov 24 18:22:01 compute-0 sudo[103713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 24 18:22:01 compute-0 sudo[103713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:22:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Nov 24 18:22:02 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2248733845' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 24 18:22:02 compute-0 zealous_shaw[103549]: 
Nov 24 18:22:02 compute-0 zealous_shaw[103549]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"rgw":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":7}}
Nov 24 18:22:02 compute-0 systemd[1]: libpod-8e02ae2236d81cf5d059eed78f6de0a56027e5bfcdda44e561cf8228351791df.scope: Deactivated successfully.
Nov 24 18:22:02 compute-0 podman[103524]: 2025-11-24 18:22:02.166631505 +0000 UTC m=+0.732970171 container died 8e02ae2236d81cf5d059eed78f6de0a56027e5bfcdda44e561cf8228351791df (image=quay.io/ceph/ceph:v18, name=zealous_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:22:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-196c9bd2700d9801eabe7f3578a0e9e8d6d30430dcec4e7e261fd7509ac18aa4-merged.mount: Deactivated successfully.
Nov 24 18:22:02 compute-0 podman[103524]: 2025-11-24 18:22:02.212852513 +0000 UTC m=+0.779191179 container remove 8e02ae2236d81cf5d059eed78f6de0a56027e5bfcdda44e561cf8228351791df (image=quay.io/ceph/ceph:v18, name=zealous_shaw, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:22:02 compute-0 systemd[1]: libpod-conmon-8e02ae2236d81cf5d059eed78f6de0a56027e5bfcdda44e561cf8228351791df.scope: Deactivated successfully.
Nov 24 18:22:02 compute-0 sudo[103505]: pam_unix(sudo:session): session closed for user root
Nov 24 18:22:02 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.e deep-scrub starts
Nov 24 18:22:02 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.e deep-scrub ok
Nov 24 18:22:02 compute-0 podman[103821]: 2025-11-24 18:22:02.385846918 +0000 UTC m=+0.045053649 container exec 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:22:02 compute-0 ceph-mon[74927]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 24 18:22:02 compute-0 ceph-mon[74927]: Cluster is now healthy
Nov 24 18:22:02 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:22:02 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:22:02 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2248733845' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 24 18:22:02 compute-0 ceph-mon[74927]: 2.e deep-scrub starts
Nov 24 18:22:02 compute-0 ceph-mon[74927]: 2.e deep-scrub ok
Nov 24 18:22:02 compute-0 podman[103821]: 2025-11-24 18:22:02.471185697 +0000 UTC m=+0.130392408 container exec_died 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:22:02 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v120: 197 pgs: 197 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 341 B/s wr, 1 op/s
Nov 24 18:22:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:22:02 compute-0 sudo[103713]: pam_unix(sudo:session): session closed for user root
Nov 24 18:22:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:22:02 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:22:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:22:02 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:22:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:22:02 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:22:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:22:02 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:22:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:22:02 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:22:02 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 319dabf3-d8dc-4b35-b62d-57c000554cb7 does not exist
Nov 24 18:22:02 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 69b7fd25-1614-481b-a45c-ef5394feb223 does not exist
Nov 24 18:22:02 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev fa82b585-d31d-4323-b445-55e90ea665da does not exist
Nov 24 18:22:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:22:02 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:22:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:22:02 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:22:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:22:02 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:22:03 compute-0 sudo[103979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:22:03 compute-0 sudo[103979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:22:03 compute-0 sudo[103979]: pam_unix(sudo:session): session closed for user root
Nov 24 18:22:03 compute-0 sudo[104004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:22:03 compute-0 sudo[104004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:22:03 compute-0 sudo[104004]: pam_unix(sudo:session): session closed for user root
Nov 24 18:22:03 compute-0 sudo[104029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:22:03 compute-0 sudo[104029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:22:03 compute-0 sudo[104029]: pam_unix(sudo:session): session closed for user root
Nov 24 18:22:03 compute-0 sudo[104054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:22:03 compute-0 sudo[104054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:22:03 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Nov 24 18:22:03 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Nov 24 18:22:03 compute-0 podman[104120]: 2025-11-24 18:22:03.53410158 +0000 UTC m=+0.039455260 container create 11bdf3e6e8af43f825819675f18003f83216311b54ec8b193356b754df08b431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_darwin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 18:22:03 compute-0 systemd[1]: Started libpod-conmon-11bdf3e6e8af43f825819675f18003f83216311b54ec8b193356b754df08b431.scope.
Nov 24 18:22:03 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:22:03 compute-0 podman[104120]: 2025-11-24 18:22:03.582757729 +0000 UTC m=+0.088111379 container init 11bdf3e6e8af43f825819675f18003f83216311b54ec8b193356b754df08b431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 18:22:03 compute-0 podman[104120]: 2025-11-24 18:22:03.59287274 +0000 UTC m=+0.098226380 container start 11bdf3e6e8af43f825819675f18003f83216311b54ec8b193356b754df08b431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 18:22:03 compute-0 practical_darwin[104137]: 167 167
Nov 24 18:22:03 compute-0 systemd[1]: libpod-11bdf3e6e8af43f825819675f18003f83216311b54ec8b193356b754df08b431.scope: Deactivated successfully.
Nov 24 18:22:03 compute-0 podman[104120]: 2025-11-24 18:22:03.59612203 +0000 UTC m=+0.101475670 container attach 11bdf3e6e8af43f825819675f18003f83216311b54ec8b193356b754df08b431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_darwin, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 18:22:03 compute-0 podman[104120]: 2025-11-24 18:22:03.596674544 +0000 UTC m=+0.102028184 container died 11bdf3e6e8af43f825819675f18003f83216311b54ec8b193356b754df08b431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:22:03 compute-0 podman[104120]: 2025-11-24 18:22:03.516701648 +0000 UTC m=+0.022055318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:22:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-7789a01009d44a20a807c79b85c4fa7713789aedc56b721818b64bebd2c1f055-merged.mount: Deactivated successfully.
Nov 24 18:22:03 compute-0 podman[104120]: 2025-11-24 18:22:03.628788051 +0000 UTC m=+0.134141691 container remove 11bdf3e6e8af43f825819675f18003f83216311b54ec8b193356b754df08b431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 18:22:03 compute-0 systemd[1]: libpod-conmon-11bdf3e6e8af43f825819675f18003f83216311b54ec8b193356b754df08b431.scope: Deactivated successfully.
Nov 24 18:22:03 compute-0 podman[104162]: 2025-11-24 18:22:03.781742069 +0000 UTC m=+0.037229055 container create 491ec1db538cd1e8468b1c36977b87634a67e67b1f05affc23eb640a8726329e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 24 18:22:03 compute-0 systemd[1]: Started libpod-conmon-491ec1db538cd1e8468b1c36977b87634a67e67b1f05affc23eb640a8726329e.scope.
Nov 24 18:22:03 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:22:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/891197beb9b38e30f44e080c1a17f561982d626e64d097e5c97b17bc807cffc1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:22:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/891197beb9b38e30f44e080c1a17f561982d626e64d097e5c97b17bc807cffc1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:22:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/891197beb9b38e30f44e080c1a17f561982d626e64d097e5c97b17bc807cffc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:22:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/891197beb9b38e30f44e080c1a17f561982d626e64d097e5c97b17bc807cffc1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:22:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/891197beb9b38e30f44e080c1a17f561982d626e64d097e5c97b17bc807cffc1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:22:03 compute-0 podman[104162]: 2025-11-24 18:22:03.853667495 +0000 UTC m=+0.109154531 container init 491ec1db538cd1e8468b1c36977b87634a67e67b1f05affc23eb640a8726329e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mcclintock, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:22:03 compute-0 podman[104162]: 2025-11-24 18:22:03.859114171 +0000 UTC m=+0.114601157 container start 491ec1db538cd1e8468b1c36977b87634a67e67b1f05affc23eb640a8726329e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mcclintock, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:22:03 compute-0 podman[104162]: 2025-11-24 18:22:03.764528652 +0000 UTC m=+0.020015658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:22:03 compute-0 podman[104162]: 2025-11-24 18:22:03.861635413 +0000 UTC m=+0.117122399 container attach 491ec1db538cd1e8468b1c36977b87634a67e67b1f05affc23eb640a8726329e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mcclintock, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 18:22:03 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Nov 24 18:22:03 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Nov 24 18:22:03 compute-0 ceph-mon[74927]: pgmap v120: 197 pgs: 197 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 341 B/s wr, 1 op/s
Nov 24 18:22:03 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:22:03 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:22:03 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:22:03 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:22:03 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:22:03 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:22:03 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:22:03 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:22:03 compute-0 ceph-mon[74927]: 3.14 scrub starts
Nov 24 18:22:03 compute-0 ceph-mon[74927]: 4.16 scrub starts
Nov 24 18:22:03 compute-0 ceph-mon[74927]: 4.16 scrub ok
Nov 24 18:22:04 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Nov 24 18:22:04 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Nov 24 18:22:04 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Nov 24 18:22:04 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Nov 24 18:22:04 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v121: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 4.9 KiB/s wr, 157 op/s
Nov 24 18:22:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:22:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:22:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:22:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:22:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:22:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:22:04 compute-0 nostalgic_mcclintock[104178]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:22:04 compute-0 nostalgic_mcclintock[104178]: --> relative data size: 1.0
Nov 24 18:22:04 compute-0 nostalgic_mcclintock[104178]: --> All data devices are unavailable
Nov 24 18:22:04 compute-0 systemd[1]: libpod-491ec1db538cd1e8468b1c36977b87634a67e67b1f05affc23eb640a8726329e.scope: Deactivated successfully.
Nov 24 18:22:04 compute-0 podman[104162]: 2025-11-24 18:22:04.838360175 +0000 UTC m=+1.093847171 container died 491ec1db538cd1e8468b1c36977b87634a67e67b1f05affc23eb640a8726329e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 18:22:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-891197beb9b38e30f44e080c1a17f561982d626e64d097e5c97b17bc807cffc1-merged.mount: Deactivated successfully.
Nov 24 18:22:04 compute-0 podman[104162]: 2025-11-24 18:22:04.887761512 +0000 UTC m=+1.143248498 container remove 491ec1db538cd1e8468b1c36977b87634a67e67b1f05affc23eb640a8726329e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 18:22:04 compute-0 systemd[1]: libpod-conmon-491ec1db538cd1e8468b1c36977b87634a67e67b1f05affc23eb640a8726329e.scope: Deactivated successfully.
Nov 24 18:22:04 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.17 deep-scrub starts
Nov 24 18:22:04 compute-0 sudo[104054]: pam_unix(sudo:session): session closed for user root
Nov 24 18:22:04 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.17 deep-scrub ok
Nov 24 18:22:04 compute-0 sudo[104219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:22:04 compute-0 sudo[104219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:22:04 compute-0 sudo[104219]: pam_unix(sudo:session): session closed for user root
Nov 24 18:22:04 compute-0 ceph-mon[74927]: 3.14 scrub ok
Nov 24 18:22:04 compute-0 ceph-mon[74927]: 2.10 scrub starts
Nov 24 18:22:04 compute-0 ceph-mon[74927]: 2.10 scrub ok
Nov 24 18:22:04 compute-0 ceph-mon[74927]: 4.17 deep-scrub starts
Nov 24 18:22:04 compute-0 ceph-mon[74927]: 4.17 deep-scrub ok
Nov 24 18:22:05 compute-0 sudo[104244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:22:05 compute-0 sudo[104244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:22:05 compute-0 sudo[104244]: pam_unix(sudo:session): session closed for user root
Nov 24 18:22:05 compute-0 sudo[104269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:22:05 compute-0 sudo[104269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:22:05 compute-0 sudo[104269]: pam_unix(sudo:session): session closed for user root
Nov 24 18:22:05 compute-0 sudo[104294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:22:05 compute-0 sudo[104294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:22:05 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Nov 24 18:22:05 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Nov 24 18:22:05 compute-0 podman[104360]: 2025-11-24 18:22:05.441350288 +0000 UTC m=+0.038464116 container create 2222d249e3cd850265bdeb4a110bd41451ceb9dcf3fedd8bba43f9137ad77760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:22:05 compute-0 systemd[1]: Started libpod-conmon-2222d249e3cd850265bdeb4a110bd41451ceb9dcf3fedd8bba43f9137ad77760.scope.
Nov 24 18:22:05 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:22:05 compute-0 podman[104360]: 2025-11-24 18:22:05.504771193 +0000 UTC m=+0.101885041 container init 2222d249e3cd850265bdeb4a110bd41451ceb9dcf3fedd8bba43f9137ad77760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_kilby, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 18:22:05 compute-0 podman[104360]: 2025-11-24 18:22:05.513201472 +0000 UTC m=+0.110315300 container start 2222d249e3cd850265bdeb4a110bd41451ceb9dcf3fedd8bba43f9137ad77760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 18:22:05 compute-0 priceless_kilby[104376]: 167 167
Nov 24 18:22:05 compute-0 systemd[1]: libpod-2222d249e3cd850265bdeb4a110bd41451ceb9dcf3fedd8bba43f9137ad77760.scope: Deactivated successfully.
Nov 24 18:22:05 compute-0 podman[104360]: 2025-11-24 18:22:05.516798551 +0000 UTC m=+0.113912409 container attach 2222d249e3cd850265bdeb4a110bd41451ceb9dcf3fedd8bba43f9137ad77760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_kilby, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 18:22:05 compute-0 podman[104360]: 2025-11-24 18:22:05.517506849 +0000 UTC m=+0.114620687 container died 2222d249e3cd850265bdeb4a110bd41451ceb9dcf3fedd8bba43f9137ad77760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:22:05 compute-0 podman[104360]: 2025-11-24 18:22:05.426576191 +0000 UTC m=+0.023690039 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:22:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec77639aebfd9ce19a2c34cfcc6154e2a7fc005fc11ac66e8ee6548d62960402-merged.mount: Deactivated successfully.
Nov 24 18:22:05 compute-0 podman[104360]: 2025-11-24 18:22:05.551013981 +0000 UTC m=+0.148127809 container remove 2222d249e3cd850265bdeb4a110bd41451ceb9dcf3fedd8bba43f9137ad77760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_kilby, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 18:22:05 compute-0 systemd[1]: libpod-conmon-2222d249e3cd850265bdeb4a110bd41451ceb9dcf3fedd8bba43f9137ad77760.scope: Deactivated successfully.
Nov 24 18:22:05 compute-0 podman[104399]: 2025-11-24 18:22:05.696491123 +0000 UTC m=+0.040468646 container create 5cfe904c5cf2a132ff605439934611cc6e13d0d9f5d0fb6b1f0e1e2a227707fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_johnson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 18:22:05 compute-0 systemd[1]: Started libpod-conmon-5cfe904c5cf2a132ff605439934611cc6e13d0d9f5d0fb6b1f0e1e2a227707fe.scope.
Nov 24 18:22:05 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:22:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b57294ab0705f143dbee9f54202995fc50e674a8770616e5caff80701a1e686/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:22:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b57294ab0705f143dbee9f54202995fc50e674a8770616e5caff80701a1e686/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:22:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b57294ab0705f143dbee9f54202995fc50e674a8770616e5caff80701a1e686/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:22:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b57294ab0705f143dbee9f54202995fc50e674a8770616e5caff80701a1e686/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:22:05 compute-0 podman[104399]: 2025-11-24 18:22:05.678711022 +0000 UTC m=+0.022688535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:22:05 compute-0 podman[104399]: 2025-11-24 18:22:05.779064833 +0000 UTC m=+0.123042366 container init 5cfe904c5cf2a132ff605439934611cc6e13d0d9f5d0fb6b1f0e1e2a227707fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_johnson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 18:22:05 compute-0 podman[104399]: 2025-11-24 18:22:05.784851057 +0000 UTC m=+0.128828570 container start 5cfe904c5cf2a132ff605439934611cc6e13d0d9f5d0fb6b1f0e1e2a227707fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_johnson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:22:05 compute-0 podman[104399]: 2025-11-24 18:22:05.787687197 +0000 UTC m=+0.131664710 container attach 5cfe904c5cf2a132ff605439934611cc6e13d0d9f5d0fb6b1f0e1e2a227707fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_johnson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 24 18:22:05 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Nov 24 18:22:05 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Nov 24 18:22:05 compute-0 ceph-mon[74927]: 3.19 scrub starts
Nov 24 18:22:05 compute-0 ceph-mon[74927]: 3.19 scrub ok
Nov 24 18:22:05 compute-0 ceph-mon[74927]: pgmap v121: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 4.9 KiB/s wr, 157 op/s
Nov 24 18:22:05 compute-0 ceph-mon[74927]: 2.12 scrub starts
Nov 24 18:22:05 compute-0 ceph-mon[74927]: 2.12 scrub ok
Nov 24 18:22:05 compute-0 ceph-mon[74927]: 6.7 scrub starts
Nov 24 18:22:05 compute-0 ceph-mon[74927]: 6.7 scrub ok
Nov 24 18:22:06 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v122: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 4.2 KiB/s wr, 138 op/s
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]: {
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:     "0": [
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:         {
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "devices": [
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "/dev/loop3"
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             ],
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "lv_name": "ceph_lv0",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "lv_size": "21470642176",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "name": "ceph_lv0",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "tags": {
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.cluster_name": "ceph",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.crush_device_class": "",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.encrypted": "0",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.osd_id": "0",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.type": "block",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.vdo": "0"
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             },
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "type": "block",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "vg_name": "ceph_vg0"
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:         }
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:     ],
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:     "1": [
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:         {
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "devices": [
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "/dev/loop4"
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             ],
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "lv_name": "ceph_lv1",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "lv_size": "21470642176",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "name": "ceph_lv1",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "tags": {
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.cluster_name": "ceph",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.crush_device_class": "",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.encrypted": "0",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.osd_id": "1",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.type": "block",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.vdo": "0"
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             },
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "type": "block",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "vg_name": "ceph_vg1"
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:         }
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:     ],
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:     "2": [
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:         {
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "devices": [
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "/dev/loop5"
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             ],
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "lv_name": "ceph_lv2",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "lv_size": "21470642176",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "name": "ceph_lv2",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "tags": {
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.cluster_name": "ceph",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.crush_device_class": "",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.encrypted": "0",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.osd_id": "2",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.type": "block",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:                 "ceph.vdo": "0"
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             },
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "type": "block",
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:             "vg_name": "ceph_vg2"
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:         }
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]:     ]
Nov 24 18:22:06 compute-0 wizardly_johnson[104415]: }
Nov 24 18:22:06 compute-0 systemd[1]: libpod-5cfe904c5cf2a132ff605439934611cc6e13d0d9f5d0fb6b1f0e1e2a227707fe.scope: Deactivated successfully.
Nov 24 18:22:06 compute-0 podman[104399]: 2025-11-24 18:22:06.565012769 +0000 UTC m=+0.908990282 container died 5cfe904c5cf2a132ff605439934611cc6e13d0d9f5d0fb6b1f0e1e2a227707fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 18:22:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b57294ab0705f143dbee9f54202995fc50e674a8770616e5caff80701a1e686-merged.mount: Deactivated successfully.
Nov 24 18:22:06 compute-0 podman[104399]: 2025-11-24 18:22:06.627588153 +0000 UTC m=+0.971565656 container remove 5cfe904c5cf2a132ff605439934611cc6e13d0d9f5d0fb6b1f0e1e2a227707fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 24 18:22:06 compute-0 systemd[1]: libpod-conmon-5cfe904c5cf2a132ff605439934611cc6e13d0d9f5d0fb6b1f0e1e2a227707fe.scope: Deactivated successfully.
Nov 24 18:22:06 compute-0 sudo[104294]: pam_unix(sudo:session): session closed for user root
Nov 24 18:22:06 compute-0 sudo[104438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:22:06 compute-0 sudo[104438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:22:06 compute-0 sudo[104438]: pam_unix(sudo:session): session closed for user root
Nov 24 18:22:06 compute-0 sudo[104463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:22:06 compute-0 sudo[104463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:22:06 compute-0 sudo[104463]: pam_unix(sudo:session): session closed for user root
Nov 24 18:22:06 compute-0 sudo[104488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:22:06 compute-0 sudo[104488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:22:06 compute-0 sudo[104488]: pam_unix(sudo:session): session closed for user root
Nov 24 18:22:06 compute-0 sudo[104513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:22:06 compute-0 sudo[104513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:22:07 compute-0 podman[104578]: 2025-11-24 18:22:07.237052006 +0000 UTC m=+0.039497612 container create 56cdd84248b29d20a7d8f757875ec87fb0fce1fdf5807b0fd73e26a844ab6144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ride, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 18:22:07 compute-0 systemd[1]: Started libpod-conmon-56cdd84248b29d20a7d8f757875ec87fb0fce1fdf5807b0fd73e26a844ab6144.scope.
Nov 24 18:22:07 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:22:07 compute-0 podman[104578]: 2025-11-24 18:22:07.31253126 +0000 UTC m=+0.114976876 container init 56cdd84248b29d20a7d8f757875ec87fb0fce1fdf5807b0fd73e26a844ab6144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ride, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:22:07 compute-0 podman[104578]: 2025-11-24 18:22:07.219701275 +0000 UTC m=+0.022146901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:22:07 compute-0 podman[104578]: 2025-11-24 18:22:07.324510758 +0000 UTC m=+0.126956364 container start 56cdd84248b29d20a7d8f757875ec87fb0fce1fdf5807b0fd73e26a844ab6144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ride, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:22:07 compute-0 podman[104578]: 2025-11-24 18:22:07.327454171 +0000 UTC m=+0.129899797 container attach 56cdd84248b29d20a7d8f757875ec87fb0fce1fdf5807b0fd73e26a844ab6144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 24 18:22:07 compute-0 nostalgic_ride[104595]: 167 167
Nov 24 18:22:07 compute-0 systemd[1]: libpod-56cdd84248b29d20a7d8f757875ec87fb0fce1fdf5807b0fd73e26a844ab6144.scope: Deactivated successfully.
Nov 24 18:22:07 compute-0 podman[104578]: 2025-11-24 18:22:07.329635675 +0000 UTC m=+0.132081351 container died 56cdd84248b29d20a7d8f757875ec87fb0fce1fdf5807b0fd73e26a844ab6144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ride, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Nov 24 18:22:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-a52b6c68bf98c5bed5d2998a600a4f750315942c2f3abae2c590a79a097b3003-merged.mount: Deactivated successfully.
Nov 24 18:22:07 compute-0 podman[104578]: 2025-11-24 18:22:07.376488869 +0000 UTC m=+0.178934475 container remove 56cdd84248b29d20a7d8f757875ec87fb0fce1fdf5807b0fd73e26a844ab6144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:22:07 compute-0 systemd[1]: libpod-conmon-56cdd84248b29d20a7d8f757875ec87fb0fce1fdf5807b0fd73e26a844ab6144.scope: Deactivated successfully.
Nov 24 18:22:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:22:07 compute-0 podman[104618]: 2025-11-24 18:22:07.537839364 +0000 UTC m=+0.049716434 container create 12145b4d01b1884d5a751a0c1f1cea526b6e8fe860f9f6d0adf406afcf0d4230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 18:22:07 compute-0 systemd[1]: Started libpod-conmon-12145b4d01b1884d5a751a0c1f1cea526b6e8fe860f9f6d0adf406afcf0d4230.scope.
Nov 24 18:22:07 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e359084bbc932d8d2aa0b6d1257eb33832f937fb96c97d93483e976fbad258f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e359084bbc932d8d2aa0b6d1257eb33832f937fb96c97d93483e976fbad258f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e359084bbc932d8d2aa0b6d1257eb33832f937fb96c97d93483e976fbad258f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e359084bbc932d8d2aa0b6d1257eb33832f937fb96c97d93483e976fbad258f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:22:07 compute-0 podman[104618]: 2025-11-24 18:22:07.597791253 +0000 UTC m=+0.109668323 container init 12145b4d01b1884d5a751a0c1f1cea526b6e8fe860f9f6d0adf406afcf0d4230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:22:07 compute-0 podman[104618]: 2025-11-24 18:22:07.607113454 +0000 UTC m=+0.118990524 container start 12145b4d01b1884d5a751a0c1f1cea526b6e8fe860f9f6d0adf406afcf0d4230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ardinghelli, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:22:07 compute-0 podman[104618]: 2025-11-24 18:22:07.514886074 +0000 UTC m=+0.026763234 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:22:07 compute-0 podman[104618]: 2025-11-24 18:22:07.611040632 +0000 UTC m=+0.122917702 container attach 12145b4d01b1884d5a751a0c1f1cea526b6e8fe860f9f6d0adf406afcf0d4230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 18:22:08 compute-0 ceph-mon[74927]: pgmap v122: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 4.2 KiB/s wr, 138 op/s
Nov 24 18:22:08 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v123: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.4 KiB/s wr, 110 op/s
Nov 24 18:22:08 compute-0 fervent_ardinghelli[104635]: {
Nov 24 18:22:08 compute-0 fervent_ardinghelli[104635]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:22:08 compute-0 fervent_ardinghelli[104635]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:22:08 compute-0 fervent_ardinghelli[104635]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:22:08 compute-0 fervent_ardinghelli[104635]:         "osd_id": 0,
Nov 24 18:22:08 compute-0 fervent_ardinghelli[104635]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:22:08 compute-0 fervent_ardinghelli[104635]:         "type": "bluestore"
Nov 24 18:22:08 compute-0 fervent_ardinghelli[104635]:     },
Nov 24 18:22:08 compute-0 fervent_ardinghelli[104635]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:22:08 compute-0 fervent_ardinghelli[104635]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:22:08 compute-0 fervent_ardinghelli[104635]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:22:08 compute-0 fervent_ardinghelli[104635]:         "osd_id": 1,
Nov 24 18:22:08 compute-0 fervent_ardinghelli[104635]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:22:08 compute-0 fervent_ardinghelli[104635]:         "type": "bluestore"
Nov 24 18:22:08 compute-0 fervent_ardinghelli[104635]:     },
Nov 24 18:22:08 compute-0 fervent_ardinghelli[104635]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:22:08 compute-0 fervent_ardinghelli[104635]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:22:08 compute-0 fervent_ardinghelli[104635]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:22:08 compute-0 fervent_ardinghelli[104635]:         "osd_id": 2,
Nov 24 18:22:08 compute-0 fervent_ardinghelli[104635]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:22:08 compute-0 fervent_ardinghelli[104635]:         "type": "bluestore"
Nov 24 18:22:08 compute-0 fervent_ardinghelli[104635]:     }
Nov 24 18:22:08 compute-0 fervent_ardinghelli[104635]: }
Nov 24 18:22:08 compute-0 systemd[1]: libpod-12145b4d01b1884d5a751a0c1f1cea526b6e8fe860f9f6d0adf406afcf0d4230.scope: Deactivated successfully.
Nov 24 18:22:08 compute-0 podman[104618]: 2025-11-24 18:22:08.601369232 +0000 UTC m=+1.113246362 container died 12145b4d01b1884d5a751a0c1f1cea526b6e8fe860f9f6d0adf406afcf0d4230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 24 18:22:08 compute-0 systemd[1]: libpod-12145b4d01b1884d5a751a0c1f1cea526b6e8fe860f9f6d0adf406afcf0d4230.scope: Consumed 1.001s CPU time.
Nov 24 18:22:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-e359084bbc932d8d2aa0b6d1257eb33832f937fb96c97d93483e976fbad258f3-merged.mount: Deactivated successfully.
Nov 24 18:22:08 compute-0 podman[104618]: 2025-11-24 18:22:08.656376628 +0000 UTC m=+1.168253698 container remove 12145b4d01b1884d5a751a0c1f1cea526b6e8fe860f9f6d0adf406afcf0d4230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 18:22:08 compute-0 systemd[1]: libpod-conmon-12145b4d01b1884d5a751a0c1f1cea526b6e8fe860f9f6d0adf406afcf0d4230.scope: Deactivated successfully.
Nov 24 18:22:08 compute-0 sudo[104513]: pam_unix(sudo:session): session closed for user root
Nov 24 18:22:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:22:08 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:22:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:22:08 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:22:08 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 43dd4794-471b-46d2-bd04-42eff94bb194 does not exist
Nov 24 18:22:08 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 83e476f0-3e38-4402-9bdc-02b40e2eb4a9 does not exist
Nov 24 18:22:08 compute-0 sudo[104680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:22:08 compute-0 sudo[104680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:22:08 compute-0 sudo[104680]: pam_unix(sudo:session): session closed for user root
Nov 24 18:22:08 compute-0 sudo[104705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:22:08 compute-0 sudo[104705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:22:08 compute-0 sudo[104705]: pam_unix(sudo:session): session closed for user root
Nov 24 18:22:08 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.19 deep-scrub starts
Nov 24 18:22:08 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.19 deep-scrub ok
Nov 24 18:22:09 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Nov 24 18:22:09 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Nov 24 18:22:09 compute-0 ceph-mon[74927]: pgmap v123: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.4 KiB/s wr, 110 op/s
Nov 24 18:22:09 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:22:09 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:22:09 compute-0 ceph-mon[74927]: 4.19 deep-scrub starts
Nov 24 18:22:09 compute-0 ceph-mon[74927]: 4.19 deep-scrub ok
Nov 24 18:22:09 compute-0 ceph-mon[74927]: 2.14 scrub starts
Nov 24 18:22:09 compute-0 ceph-mon[74927]: 2.14 scrub ok
Nov 24 18:22:10 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.1a deep-scrub starts
Nov 24 18:22:10 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.1a deep-scrub ok
Nov 24 18:22:10 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v124: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 2.9 KiB/s wr, 99 op/s
Nov 24 18:22:10 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Nov 24 18:22:10 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Nov 24 18:22:11 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Nov 24 18:22:11 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Nov 24 18:22:11 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Nov 24 18:22:11 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Nov 24 18:22:11 compute-0 ceph-mon[74927]: 3.1a deep-scrub starts
Nov 24 18:22:11 compute-0 ceph-mon[74927]: 3.1a deep-scrub ok
Nov 24 18:22:11 compute-0 ceph-mon[74927]: pgmap v124: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 2.9 KiB/s wr, 99 op/s
Nov 24 18:22:11 compute-0 ceph-mon[74927]: 6.9 scrub starts
Nov 24 18:22:11 compute-0 ceph-mon[74927]: 6.9 scrub ok
Nov 24 18:22:11 compute-0 ceph-mon[74927]: 2.1a scrub starts
Nov 24 18:22:11 compute-0 ceph-mon[74927]: 2.1a scrub ok
Nov 24 18:22:12 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.1e deep-scrub starts
Nov 24 18:22:12 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.1e deep-scrub ok
Nov 24 18:22:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:22:12 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v125: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.7 KiB/s wr, 91 op/s
Nov 24 18:22:12 compute-0 ceph-mon[74927]: 3.1c scrub starts
Nov 24 18:22:12 compute-0 ceph-mon[74927]: 3.1c scrub ok
Nov 24 18:22:12 compute-0 ceph-mon[74927]: 2.1e deep-scrub starts
Nov 24 18:22:12 compute-0 ceph-mon[74927]: 2.1e deep-scrub ok
Nov 24 18:22:12 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.a scrub starts
Nov 24 18:22:12 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.a scrub ok
Nov 24 18:22:13 compute-0 ceph-mon[74927]: pgmap v125: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.7 KiB/s wr, 91 op/s
Nov 24 18:22:13 compute-0 ceph-mon[74927]: 6.a scrub starts
Nov 24 18:22:13 compute-0 ceph-mon[74927]: 6.a scrub ok
Nov 24 18:22:13 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Nov 24 18:22:13 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Nov 24 18:22:14 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v126: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.7 KiB/s wr, 91 op/s
Nov 24 18:22:14 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.7 deep-scrub starts
Nov 24 18:22:14 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.7 deep-scrub ok
Nov 24 18:22:14 compute-0 ceph-mon[74927]: 4.1d scrub starts
Nov 24 18:22:14 compute-0 ceph-mon[74927]: 4.1d scrub ok
Nov 24 18:22:14 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Nov 24 18:22:14 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Nov 24 18:22:15 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Nov 24 18:22:15 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Nov 24 18:22:15 compute-0 ceph-mon[74927]: pgmap v126: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.7 KiB/s wr, 91 op/s
Nov 24 18:22:15 compute-0 ceph-mon[74927]: 7.7 deep-scrub starts
Nov 24 18:22:15 compute-0 ceph-mon[74927]: 7.7 deep-scrub ok
Nov 24 18:22:15 compute-0 ceph-mon[74927]: 4.1e scrub starts
Nov 24 18:22:15 compute-0 ceph-mon[74927]: 5.6 scrub starts
Nov 24 18:22:15 compute-0 ceph-mon[74927]: 5.6 scrub ok
Nov 24 18:22:15 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Nov 24 18:22:15 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Nov 24 18:22:16 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v127: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:16 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.b deep-scrub starts
Nov 24 18:22:16 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.b deep-scrub ok
Nov 24 18:22:16 compute-0 ceph-mon[74927]: 4.1e scrub ok
Nov 24 18:22:16 compute-0 ceph-mon[74927]: 4.1f scrub starts
Nov 24 18:22:16 compute-0 ceph-mon[74927]: 4.1f scrub ok
Nov 24 18:22:16 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Nov 24 18:22:16 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Nov 24 18:22:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:22:17 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.d scrub starts
Nov 24 18:22:17 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.d scrub ok
Nov 24 18:22:17 compute-0 ceph-mon[74927]: pgmap v127: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:17 compute-0 ceph-mon[74927]: 7.b deep-scrub starts
Nov 24 18:22:17 compute-0 ceph-mon[74927]: 7.b deep-scrub ok
Nov 24 18:22:17 compute-0 ceph-mon[74927]: 6.10 scrub starts
Nov 24 18:22:17 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.12 scrub starts
Nov 24 18:22:17 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.12 scrub ok
Nov 24 18:22:18 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v128: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:18 compute-0 ceph-mon[74927]: 6.10 scrub ok
Nov 24 18:22:18 compute-0 ceph-mon[74927]: 7.d scrub starts
Nov 24 18:22:18 compute-0 ceph-mon[74927]: 7.d scrub ok
Nov 24 18:22:18 compute-0 ceph-mon[74927]: 6.12 scrub starts
Nov 24 18:22:18 compute-0 ceph-mon[74927]: 6.12 scrub ok
Nov 24 18:22:19 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Nov 24 18:22:19 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Nov 24 18:22:19 compute-0 ceph-mon[74927]: pgmap v128: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:19 compute-0 ceph-mon[74927]: 5.8 scrub starts
Nov 24 18:22:19 compute-0 ceph-mon[74927]: 5.8 scrub ok
Nov 24 18:22:20 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v129: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:21 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 24 18:22:21 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 24 18:22:21 compute-0 ceph-mon[74927]: pgmap v129: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:22:22 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v130: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:22 compute-0 ceph-mon[74927]: 7.10 scrub starts
Nov 24 18:22:22 compute-0 ceph-mon[74927]: 7.10 scrub ok
Nov 24 18:22:23 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.a scrub starts
Nov 24 18:22:23 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.a scrub ok
Nov 24 18:22:23 compute-0 ceph-mon[74927]: pgmap v130: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:23 compute-0 ceph-mon[74927]: 5.a scrub starts
Nov 24 18:22:23 compute-0 ceph-mon[74927]: 5.a scrub ok
Nov 24 18:22:23 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Nov 24 18:22:23 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Nov 24 18:22:24 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.b scrub starts
Nov 24 18:22:24 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.b scrub ok
Nov 24 18:22:24 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v131: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:24 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.12 deep-scrub starts
Nov 24 18:22:24 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.12 deep-scrub ok
Nov 24 18:22:24 compute-0 ceph-mon[74927]: 6.16 scrub starts
Nov 24 18:22:24 compute-0 ceph-mon[74927]: 6.16 scrub ok
Nov 24 18:22:24 compute-0 ceph-mon[74927]: 5.b scrub starts
Nov 24 18:22:24 compute-0 ceph-mon[74927]: 5.b scrub ok
Nov 24 18:22:25 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Nov 24 18:22:25 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Nov 24 18:22:25 compute-0 ceph-mon[74927]: pgmap v131: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:25 compute-0 ceph-mon[74927]: 7.12 deep-scrub starts
Nov 24 18:22:25 compute-0 ceph-mon[74927]: 7.12 deep-scrub ok
Nov 24 18:22:25 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.18 deep-scrub starts
Nov 24 18:22:25 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.18 deep-scrub ok
Nov 24 18:22:26 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v132: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:26 compute-0 ceph-mon[74927]: 7.14 scrub starts
Nov 24 18:22:26 compute-0 ceph-mon[74927]: 7.14 scrub ok
Nov 24 18:22:26 compute-0 ceph-mon[74927]: 6.18 deep-scrub starts
Nov 24 18:22:26 compute-0 ceph-mon[74927]: 6.18 deep-scrub ok
Nov 24 18:22:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:22:27 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Nov 24 18:22:27 compute-0 ceph-mon[74927]: pgmap v132: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:27 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Nov 24 18:22:28 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v133: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:28 compute-0 ceph-mon[74927]: 6.19 scrub starts
Nov 24 18:22:28 compute-0 ceph-mon[74927]: 6.19 scrub ok
Nov 24 18:22:29 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.d scrub starts
Nov 24 18:22:29 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.d scrub ok
Nov 24 18:22:29 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Nov 24 18:22:29 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Nov 24 18:22:29 compute-0 ceph-mon[74927]: pgmap v133: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:29 compute-0 ceph-mon[74927]: 5.d scrub starts
Nov 24 18:22:29 compute-0 ceph-mon[74927]: 5.d scrub ok
Nov 24 18:22:30 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.e scrub starts
Nov 24 18:22:30 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.e scrub ok
Nov 24 18:22:30 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v134: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:30 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.17 deep-scrub starts
Nov 24 18:22:30 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.17 deep-scrub ok
Nov 24 18:22:30 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Nov 24 18:22:30 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Nov 24 18:22:30 compute-0 ceph-mon[74927]: 7.16 scrub starts
Nov 24 18:22:30 compute-0 ceph-mon[74927]: 7.16 scrub ok
Nov 24 18:22:30 compute-0 ceph-mon[74927]: 5.e scrub starts
Nov 24 18:22:30 compute-0 ceph-mon[74927]: 5.e scrub ok
Nov 24 18:22:31 compute-0 ceph-mon[74927]: pgmap v134: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:31 compute-0 ceph-mon[74927]: 7.17 deep-scrub starts
Nov 24 18:22:31 compute-0 ceph-mon[74927]: 7.17 deep-scrub ok
Nov 24 18:22:31 compute-0 ceph-mon[74927]: 6.1a scrub starts
Nov 24 18:22:31 compute-0 ceph-mon[74927]: 6.1a scrub ok
Nov 24 18:22:32 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Nov 24 18:22:32 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Nov 24 18:22:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:22:32 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v135: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:32 compute-0 ceph-mon[74927]: 5.10 scrub starts
Nov 24 18:22:32 compute-0 ceph-mon[74927]: 5.10 scrub ok
Nov 24 18:22:33 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Nov 24 18:22:33 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Nov 24 18:22:33 compute-0 ceph-mon[74927]: pgmap v135: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:34 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Nov 24 18:22:34 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Nov 24 18:22:34 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v136: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:22:34
Nov 24 18:22:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:22:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:22:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'volumes', 'default.rgw.log', 'default.rgw.control', 'vms', 'backups', 'default.rgw.meta', 'images']
Nov 24 18:22:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:22:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:22:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:22:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:22:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:22:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:22:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:22:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:22:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:22:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:22:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:22:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:22:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:22:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:22:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:22:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:22:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:22:34 compute-0 ceph-mon[74927]: 6.1b scrub starts
Nov 24 18:22:34 compute-0 ceph-mon[74927]: 6.1b scrub ok
Nov 24 18:22:34 compute-0 ceph-mon[74927]: 5.17 scrub starts
Nov 24 18:22:34 compute-0 ceph-mon[74927]: 5.17 scrub ok
Nov 24 18:22:35 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Nov 24 18:22:35 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Nov 24 18:22:35 compute-0 ceph-mon[74927]: pgmap v136: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:36 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v137: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:36 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Nov 24 18:22:36 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Nov 24 18:22:36 compute-0 ceph-mon[74927]: 7.19 scrub starts
Nov 24 18:22:36 compute-0 ceph-mon[74927]: 7.19 scrub ok
Nov 24 18:22:36 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.1b deep-scrub starts
Nov 24 18:22:36 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.1b deep-scrub ok
Nov 24 18:22:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:22:37 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Nov 24 18:22:37 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Nov 24 18:22:37 compute-0 ceph-mon[74927]: pgmap v137: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:37 compute-0 ceph-mon[74927]: 7.1d scrub starts
Nov 24 18:22:37 compute-0 ceph-mon[74927]: 7.1d scrub ok
Nov 24 18:22:37 compute-0 ceph-mon[74927]: 5.1b deep-scrub starts
Nov 24 18:22:37 compute-0 ceph-mon[74927]: 5.1b deep-scrub ok
Nov 24 18:22:37 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Nov 24 18:22:37 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Nov 24 18:22:38 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v138: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:38 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.19 deep-scrub starts
Nov 24 18:22:38 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.19 deep-scrub ok
Nov 24 18:22:38 compute-0 ceph-mon[74927]: 7.1e scrub starts
Nov 24 18:22:38 compute-0 ceph-mon[74927]: 7.1e scrub ok
Nov 24 18:22:38 compute-0 ceph-mon[74927]: 5.1c scrub starts
Nov 24 18:22:38 compute-0 ceph-mon[74927]: 5.1c scrub ok
Nov 24 18:22:39 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:22:39 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:22:39 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:22:39 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:22:39 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:22:39 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:22:39 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:22:39 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:22:39 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:22:39 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:22:39 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:22:39 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:22:39 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:22:39 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:22:39 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:22:39 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:22:39 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Nov 24 18:22:39 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:22:39 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Nov 24 18:22:39 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:22:39 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 24 18:22:39 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:22:39 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Nov 24 18:22:39 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Nov 24 18:22:39 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 18:22:39 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.17 scrub starts
Nov 24 18:22:39 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.17 scrub ok
Nov 24 18:22:39 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Nov 24 18:22:39 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Nov 24 18:22:39 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Nov 24 18:22:39 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Nov 24 18:22:39 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Nov 24 18:22:39 compute-0 ceph-mon[74927]: pgmap v138: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:39 compute-0 ceph-mon[74927]: 5.19 deep-scrub starts
Nov 24 18:22:39 compute-0 ceph-mon[74927]: 5.19 deep-scrub ok
Nov 24 18:22:39 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 18:22:39 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 24 18:22:39 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Nov 24 18:22:39 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Nov 24 18:22:39 compute-0 ceph-mgr[75218]: [progress INFO root] update: starting ev 30905073-f488-4412-b1f6-76b8a4219cbb (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 24 18:22:39 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Nov 24 18:22:39 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 18:22:40 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v140: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:40 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 24 18:22:40 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 18:22:40 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Nov 24 18:22:40 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 24 18:22:40 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 18:22:40 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Nov 24 18:22:40 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Nov 24 18:22:40 compute-0 ceph-mgr[75218]: [progress INFO root] update: starting ev c47872ff-514b-42a8-89af-e3703917dc10 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 24 18:22:40 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Nov 24 18:22:40 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 18:22:40 compute-0 ceph-mon[74927]: 6.17 scrub starts
Nov 24 18:22:40 compute-0 ceph-mon[74927]: 6.17 scrub ok
Nov 24 18:22:40 compute-0 ceph-mon[74927]: 5.1e scrub starts
Nov 24 18:22:40 compute-0 ceph-mon[74927]: 5.1e scrub ok
Nov 24 18:22:40 compute-0 ceph-mon[74927]: 5.1f scrub starts
Nov 24 18:22:40 compute-0 ceph-mon[74927]: 5.1f scrub ok
Nov 24 18:22:40 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 24 18:22:40 compute-0 ceph-mon[74927]: osdmap e56: 3 total, 3 up, 3 in
Nov 24 18:22:40 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 18:22:40 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 18:22:41 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.14 deep-scrub starts
Nov 24 18:22:41 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.14 deep-scrub ok
Nov 24 18:22:41 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Nov 24 18:22:41 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 24 18:22:41 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Nov 24 18:22:41 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Nov 24 18:22:41 compute-0 ceph-mgr[75218]: [progress INFO root] update: starting ev d697af87-958c-4d1a-ba59-ba26ca059987 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 24 18:22:41 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Nov 24 18:22:41 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 18:22:41 compute-0 ceph-mon[74927]: pgmap v140: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:41 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 24 18:22:41 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 18:22:41 compute-0 ceph-mon[74927]: osdmap e57: 3 total, 3 up, 3 in
Nov 24 18:22:41 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 18:22:41 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 24 18:22:41 compute-0 ceph-mon[74927]: osdmap e58: 3 total, 3 up, 3 in
Nov 24 18:22:41 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 18:22:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:22:42 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v143: 228 pgs: 31 unknown, 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 24 18:22:42 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 18:22:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 24 18:22:42 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 57 pg[8.0( v 48'4 (0'0,48'4] local-lis/les=47/48 n=4 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=13.580782890s) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 48'3 mlcod 48'3 active pruub 135.933990479s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.0( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=13.580782890s) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 48'3 mlcod 0'0 unknown pruub 135.933990479s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.5( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.2( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.e( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.f( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.11( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.13( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.4( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.12( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.1( v 48'4 (0'0,48'4] local-lis/les=47/48 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.10( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.7( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.9( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.8( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.3( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.19( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.14( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.a( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.16( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.c( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.d( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.17( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.1a( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.b( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.18( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.15( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.1b( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.6( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.1c( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.1d( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.1e( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.1f( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Nov 24 18:22:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Nov 24 18:22:42 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 24 18:22:42 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 18:22:42 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 18:22:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Nov 24 18:22:42 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Nov 24 18:22:42 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Nov 24 18:22:42 compute-0 ceph-mgr[75218]: [progress INFO root] update: starting ev 6c278b28-0bab-4e25-b4c2-6f4165c1702c (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 24 18:22:42 compute-0 ceph-mgr[75218]: [progress INFO root] complete: finished ev 30905073-f488-4412-b1f6-76b8a4219cbb (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 24 18:22:42 compute-0 ceph-mgr[75218]: [progress INFO root] Completed event 30905073-f488-4412-b1f6-76b8a4219cbb (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Nov 24 18:22:42 compute-0 ceph-mgr[75218]: [progress INFO root] complete: finished ev c47872ff-514b-42a8-89af-e3703917dc10 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 24 18:22:42 compute-0 ceph-mgr[75218]: [progress INFO root] Completed event c47872ff-514b-42a8-89af-e3703917dc10 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Nov 24 18:22:42 compute-0 ceph-mgr[75218]: [progress INFO root] complete: finished ev d697af87-958c-4d1a-ba59-ba26ca059987 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 24 18:22:42 compute-0 ceph-mgr[75218]: [progress INFO root] Completed event d697af87-958c-4d1a-ba59-ba26ca059987 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Nov 24 18:22:42 compute-0 ceph-mgr[75218]: [progress INFO root] complete: finished ev 6c278b28-0bab-4e25-b4c2-6f4165c1702c (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 24 18:22:42 compute-0 ceph-mgr[75218]: [progress INFO root] Completed event 6c278b28-0bab-4e25-b4c2-6f4165c1702c (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[9.0( v 55'385 (0'0,55'385] local-lis/les=49/50 n=177 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=15.371401787s) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 55'384 mlcod 55'384 active pruub 137.947372437s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:42 compute-0 ceph-mon[74927]: 4.14 deep-scrub starts
Nov 24 18:22:42 compute-0 ceph-mon[74927]: 4.14 deep-scrub ok
Nov 24 18:22:42 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 18:22:42 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 18:22:42 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 24 18:22:42 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 18:22:42 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 18:22:42 compute-0 ceph-mon[74927]: osdmap e59: 3 total, 3 up, 3 in
Nov 24 18:22:42 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 59 pg[10.0( v 52'16 (0'0,52'16] local-lis/les=51/52 n=8 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=9.499588013s) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 52'15 mlcod 52'15 active pruub 124.561424255s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.14( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.16( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[9.0( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=5 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=15.371401787s) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 55'384 mlcod 0'0 unknown pruub 137.947372437s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 59 pg[10.0( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=9.499588013s) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 52'15 mlcod 0'0 unknown pruub 124.561424255s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.17( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.10( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.15( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.3( v 48'4 (0'0,48'4] local-lis/les=57/59 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.1( v 48'4 (0'0,48'4] local-lis/les=57/59 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.c( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.e( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.2( v 48'4 (0'0,48'4] local-lis/les=57/59 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.d( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.8( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.b( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.f( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.9( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.0( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 48'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.6( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.5( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.a( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.1b( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.7( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.19( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.1f( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.18( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.4( v 48'4 (0'0,48'4] local-lis/les=57/59 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.1e( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.1d( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.1c( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.13( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.12( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.1a( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:42 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.11( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Nov 24 18:22:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Nov 24 18:22:43 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.15( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.14( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.17( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.16( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.1e( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.d( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.1b( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.a( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.b( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.13( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.12( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.11( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.11( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.3( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.2( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.10( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-mon[74927]: pgmap v143: 228 pgs: 31 unknown, 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:44 compute-0 ceph-mon[74927]: 4.18 scrub starts
Nov 24 18:22:44 compute-0 ceph-mon[74927]: 4.18 scrub ok
Nov 24 18:22:44 compute-0 ceph-mon[74927]: osdmap e60: 3 total, 3 up, 3 in
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.1f( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.1d( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.1c( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.d( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.1a( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.19( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.18( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.7( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.c( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.6( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.9( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.8( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.5( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.f( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.4( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.9( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.f( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.c( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.1( v 52'16 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.e( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.3( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.e( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.a( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.14( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.b( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.8( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.15( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.2( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.16( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.17( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.6( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.1( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.7( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.4( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.1a( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.5( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.18( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.19( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.1c( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.1e( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.1f( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.1d( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.12( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.13( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.10( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.1b( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.14( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.1b( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.b( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.a( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.12( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.13( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.1f( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.11( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.1d( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.1c( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.10( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.1a( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.18( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.19( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.7( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.d( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.6( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.8( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.5( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.1e( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.f( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.9( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.c( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.0( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 52'15 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.1( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.e( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.15( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.3( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.14( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.16( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.4( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.11( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.0( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 55'384 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.d( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.17( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.9( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.2( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.3( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.2( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.a( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.1( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.b( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.1a( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.5( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.4( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.12( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.1d( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.1b( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.10( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:44 compute-0 sudo[104753]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pehqkdndhwwxjjthytuzoafylkdzpqkr ; /usr/bin/python3'
Nov 24 18:22:44 compute-0 sudo[104753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:22:44 compute-0 python3[104755]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:22:44 compute-0 podman[104756]: 2025-11-24 18:22:44.265971468 +0000 UTC m=+0.042157219 container create 5a4bdd43126812557beb0f0406bf08cd1c825fc83237268e118e83694d1c65f6 (image=quay.io/ceph/ceph:v18, name=mystifying_meitner, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 18:22:44 compute-0 systemd[1]: Started libpod-conmon-5a4bdd43126812557beb0f0406bf08cd1c825fc83237268e118e83694d1c65f6.scope.
Nov 24 18:22:44 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bd176ebf21184e910650fa7eaa69474aef7f7eee92f2a833c9327e4418d2a00/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bd176ebf21184e910650fa7eaa69474aef7f7eee92f2a833c9327e4418d2a00/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:22:44 compute-0 podman[104756]: 2025-11-24 18:22:44.327856267 +0000 UTC m=+0.104042038 container init 5a4bdd43126812557beb0f0406bf08cd1c825fc83237268e118e83694d1c65f6 (image=quay.io/ceph/ceph:v18, name=mystifying_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:22:44 compute-0 podman[104756]: 2025-11-24 18:22:44.333295631 +0000 UTC m=+0.109481372 container start 5a4bdd43126812557beb0f0406bf08cd1c825fc83237268e118e83694d1c65f6 (image=quay.io/ceph/ceph:v18, name=mystifying_meitner, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 24 18:22:44 compute-0 podman[104756]: 2025-11-24 18:22:44.336308637 +0000 UTC m=+0.112494378 container attach 5a4bdd43126812557beb0f0406bf08cd1c825fc83237268e118e83694d1c65f6 (image=quay.io/ceph/ceph:v18, name=mystifying_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 24 18:22:44 compute-0 podman[104756]: 2025-11-24 18:22:44.245674421 +0000 UTC m=+0.021860212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:22:44 compute-0 mystifying_meitner[104771]: could not fetch user info: no user info saved
Nov 24 18:22:44 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v146: 290 pgs: 1 peering, 31 unknown, 258 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:44 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 24 18:22:44 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 18:22:44 compute-0 systemd[1]: libpod-5a4bdd43126812557beb0f0406bf08cd1c825fc83237268e118e83694d1c65f6.scope: Deactivated successfully.
Nov 24 18:22:44 compute-0 conmon[104771]: conmon 5a4bdd43126812557beb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5a4bdd43126812557beb0f0406bf08cd1c825fc83237268e118e83694d1c65f6.scope/container/memory.events
Nov 24 18:22:44 compute-0 podman[104756]: 2025-11-24 18:22:44.530841516 +0000 UTC m=+0.307027317 container died 5a4bdd43126812557beb0f0406bf08cd1c825fc83237268e118e83694d1c65f6 (image=quay.io/ceph/ceph:v18, name=mystifying_meitner, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Nov 24 18:22:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bd176ebf21184e910650fa7eaa69474aef7f7eee92f2a833c9327e4418d2a00-merged.mount: Deactivated successfully.
Nov 24 18:22:44 compute-0 podman[104756]: 2025-11-24 18:22:44.563584507 +0000 UTC m=+0.339770248 container remove 5a4bdd43126812557beb0f0406bf08cd1c825fc83237268e118e83694d1c65f6 (image=quay.io/ceph/ceph:v18, name=mystifying_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:22:44 compute-0 systemd[1]: libpod-conmon-5a4bdd43126812557beb0f0406bf08cd1c825fc83237268e118e83694d1c65f6.scope: Deactivated successfully.
Nov 24 18:22:44 compute-0 sudo[104753]: pam_unix(sudo:session): session closed for user root
Nov 24 18:22:44 compute-0 sudo[104892]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uivoqwesuamskvvcfpvheiajlakrlrpa ; /usr/bin/python3'
Nov 24 18:22:44 compute-0 sudo[104892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:22:44 compute-0 ceph-mgr[75218]: [progress INFO root] Writing back 15 completed events
Nov 24 18:22:44 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 24 18:22:44 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:22:44 compute-0 python3[104894]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:22:44 compute-0 podman[104895]: 2025-11-24 18:22:44.933876881 +0000 UTC m=+0.050209248 container create 4ab2831b6ba41edec09d634a9a15c4890419b3ff8219b00949ff2c43d0deead9 (image=quay.io/ceph/ceph:v18, name=vigilant_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:22:44 compute-0 systemd[1]: Started libpod-conmon-4ab2831b6ba41edec09d634a9a15c4890419b3ff8219b00949ff2c43d0deead9.scope.
Nov 24 18:22:44 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e79ca7b5696c2d7f54f08e5ee4fd486472ff93bee4cb53813cc1d4baf83a631e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e79ca7b5696c2d7f54f08e5ee4fd486472ff93bee4cb53813cc1d4baf83a631e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:22:44 compute-0 podman[104895]: 2025-11-24 18:22:44.995159633 +0000 UTC m=+0.111492010 container init 4ab2831b6ba41edec09d634a9a15c4890419b3ff8219b00949ff2c43d0deead9 (image=quay.io/ceph/ceph:v18, name=vigilant_franklin, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:22:45 compute-0 podman[104895]: 2025-11-24 18:22:45.000814224 +0000 UTC m=+0.117146631 container start 4ab2831b6ba41edec09d634a9a15c4890419b3ff8219b00949ff2c43d0deead9 (image=quay.io/ceph/ceph:v18, name=vigilant_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:22:45 compute-0 podman[104895]: 2025-11-24 18:22:45.00419456 +0000 UTC m=+0.120526927 container attach 4ab2831b6ba41edec09d634a9a15c4890419b3ff8219b00949ff2c43d0deead9 (image=quay.io/ceph/ceph:v18, name=vigilant_franklin, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 18:22:45 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Nov 24 18:22:45 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 18:22:45 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:22:45 compute-0 podman[104895]: 2025-11-24 18:22:44.91978032 +0000 UTC m=+0.036112697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 18:22:45 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 18:22:45 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Nov 24 18:22:45 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Nov 24 18:22:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 61 pg[11.0( empty local-lis/les=53/54 n=0 ec=53/53 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=9.445177078s) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active pruub 134.109207153s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:45 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 61 pg[11.0( empty local-lis/les=53/54 n=0 ec=53/53 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=9.445177078s) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown pruub 134.109207153s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:45 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Nov 24 18:22:45 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Nov 24 18:22:46 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Nov 24 18:22:46 compute-0 ceph-mon[74927]: pgmap v146: 290 pgs: 1 peering, 31 unknown, 258 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:46 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 18:22:46 compute-0 ceph-mon[74927]: osdmap e61: 3 total, 3 up, 3 in
Nov 24 18:22:46 compute-0 ceph-mon[74927]: 2.19 scrub starts
Nov 24 18:22:46 compute-0 ceph-mon[74927]: 2.19 scrub ok
Nov 24 18:22:46 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Nov 24 18:22:46 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.16( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.17( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.15( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.14( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.13( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.2( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.1( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.f( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.e( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.d( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.b( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.9( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.c( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.8( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.a( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.3( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.4( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.5( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.6( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.7( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.18( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.1a( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.1b( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.1d( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.1e( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.1f( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.10( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.11( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.1c( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.12( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.19( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.17( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.16( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.15( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.14( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.1( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.2( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.13( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.0( empty local-lis/les=61/62 n=0 ec=53/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.b( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.f( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.9( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.c( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.a( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.3( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.8( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.4( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.5( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.7( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.18( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.1a( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.6( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.1b( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.1d( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.1e( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.1f( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.11( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.10( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.12( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.1c( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.e( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.19( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.d( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]: {
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:     "user_id": "openstack",
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:     "display_name": "openstack",
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:     "email": "",
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:     "suspended": 0,
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:     "max_buckets": 1000,
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:     "subusers": [],
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:     "keys": [
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:         {
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:             "user": "openstack",
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:             "access_key": "AUTOF8MRD5G1EGMX38JK",
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:             "secret_key": "jApM1ACuLGnfBFuI1u30xQJLvdOWiGTlf0zmyl5B"
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:         }
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:     ],
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:     "swift_keys": [],
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:     "caps": [],
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:     "op_mask": "read, write, delete",
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:     "default_placement": "",
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:     "default_storage_class": "",
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:     "placement_tags": [],
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:     "bucket_quota": {
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:         "enabled": false,
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:         "check_on_raw": false,
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:         "max_size": -1,
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:         "max_size_kb": 0,
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:         "max_objects": -1
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:     },
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:     "user_quota": {
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:         "enabled": false,
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:         "check_on_raw": false,
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:         "max_size": -1,
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:         "max_size_kb": 0,
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:         "max_objects": -1
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:     },
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:     "temp_url_keys": [],
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:     "type": "rgw",
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]:     "mfa_ids": []
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]: }
Nov 24 18:22:46 compute-0 vigilant_franklin[104908]: 
Nov 24 18:22:46 compute-0 systemd[1]: libpod-4ab2831b6ba41edec09d634a9a15c4890419b3ff8219b00949ff2c43d0deead9.scope: Deactivated successfully.
Nov 24 18:22:46 compute-0 podman[104895]: 2025-11-24 18:22:46.174172173 +0000 UTC m=+1.290504540 container died 4ab2831b6ba41edec09d634a9a15c4890419b3ff8219b00949ff2c43d0deead9 (image=quay.io/ceph/ceph:v18, name=vigilant_franklin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 18:22:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-e79ca7b5696c2d7f54f08e5ee4fd486472ff93bee4cb53813cc1d4baf83a631e-merged.mount: Deactivated successfully.
Nov 24 18:22:46 compute-0 podman[104895]: 2025-11-24 18:22:46.226974104 +0000 UTC m=+1.343306471 container remove 4ab2831b6ba41edec09d634a9a15c4890419b3ff8219b00949ff2c43d0deead9 (image=quay.io/ceph/ceph:v18, name=vigilant_franklin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 18:22:46 compute-0 systemd[1]: libpod-conmon-4ab2831b6ba41edec09d634a9a15c4890419b3ff8219b00949ff2c43d0deead9.scope: Deactivated successfully.
Nov 24 18:22:46 compute-0 sudo[104892]: pam_unix(sudo:session): session closed for user root
Nov 24 18:22:46 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v149: 321 pgs: 1 peering, 62 unknown, 258 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:47 compute-0 ceph-mon[74927]: osdmap e62: 3 total, 3 up, 3 in
Nov 24 18:22:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:22:48 compute-0 ceph-mon[74927]: pgmap v149: 321 pgs: 1 peering, 62 unknown, 258 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:48 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v150: 321 pgs: 31 unknown, 290 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:48 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.7 deep-scrub starts
Nov 24 18:22:48 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.7 deep-scrub ok
Nov 24 18:22:49 compute-0 ceph-mon[74927]: 5.7 deep-scrub starts
Nov 24 18:22:49 compute-0 ceph-mon[74927]: 5.7 deep-scrub ok
Nov 24 18:22:49 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.12 deep-scrub starts
Nov 24 18:22:49 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.12 deep-scrub ok
Nov 24 18:22:49 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Nov 24 18:22:49 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Nov 24 18:22:50 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Nov 24 18:22:50 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Nov 24 18:22:50 compute-0 ceph-mon[74927]: pgmap v150: 321 pgs: 31 unknown, 290 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:22:50 compute-0 ceph-mon[74927]: 5.4 scrub starts
Nov 24 18:22:50 compute-0 ceph-mon[74927]: 5.4 scrub ok
Nov 24 18:22:50 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v151: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 314 B/s wr, 2 op/s
Nov 24 18:22:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 24 18:22:50 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 18:22:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 24 18:22:50 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 18:22:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 24 18:22:50 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 24 18:22:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 24 18:22:50 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 18:22:51 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Nov 24 18:22:51 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 18:22:51 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 18:22:51 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 24 18:22:51 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 18:22:51 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Nov 24 18:22:51 compute-0 ceph-mon[74927]: 4.12 deep-scrub starts
Nov 24 18:22:51 compute-0 ceph-mon[74927]: 4.12 deep-scrub ok
Nov 24 18:22:51 compute-0 ceph-mon[74927]: 6.14 scrub starts
Nov 24 18:22:51 compute-0 ceph-mon[74927]: 6.14 scrub ok
Nov 24 18:22:51 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 18:22:51 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 18:22:51 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 24 18:22:51 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 18:22:51 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.17( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.935743332s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.651275635s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.14( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.868105888s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.583679199s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.15( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.871232033s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.586914062s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.14( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.867993355s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.583679199s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.17( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.935569763s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.651275635s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.892376900s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.608245850s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.15( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.871136665s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.586914062s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.892348289s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.608245850s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.15( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.935056686s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.651275635s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.15( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.935006142s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.651275635s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.899293900s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.615631104s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.899258614s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.615631104s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.14( v 62'1 (0'0,62'1] local-lis/les=61/62 n=1 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.942113876s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=62'1 lcod 0'0 mlcod 0'0 active pruub 141.658615112s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.10( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.870314598s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.586868286s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.11( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.898689270s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.615234375s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.11( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.898637772s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.615234375s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.10( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.870242119s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.586868286s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.14( v 62'1 (0'0,62'1] local-lis/les=61/62 n=1 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.942015648s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=62'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.658615112s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.2( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.941965103s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.658615112s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.1( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.941914558s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.658630371s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.2( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.941915512s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.658615112s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.1( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.941877365s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.658630371s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.2( v 48'4 (0'0,48'4] local-lis/les=57/59 n=1 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.870236397s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587051392s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.2( v 48'4 (0'0,48'4] local-lis/les=57/59 n=1 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.870181084s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587051392s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.3( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.898333549s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.615310669s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.3( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.898296356s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.615310669s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.c( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.869590759s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.586975098s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.f( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.941417694s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.658782959s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.d( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.898124695s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.615554810s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.c( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.869544029s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.586975098s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.f( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.941282272s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.658782959s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.e( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.941436768s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.659042358s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.e( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.941395760s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.659042358s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.d( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.869237900s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587051392s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.d( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.869210243s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587051392s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.d( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.898085594s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.615554810s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.d( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.940861702s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.658782959s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.d( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.940839767s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.658782959s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.e( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.868601799s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587005615s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.b( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.940368652s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.658782959s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.898828506s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.617248535s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.e( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.868579865s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587005615s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.b( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.940345764s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.658782959s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.898781776s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.617248535s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.9( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.897096634s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.615768433s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.9( v 62'1 (0'0,62'1] local-lis/les=61/62 n=1 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.940112114s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=62'1 lcod 0'0 mlcod 0'0 active pruub 141.658813477s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.9( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.897046089s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.615768433s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.9( v 62'1 (0'0,62'1] local-lis/les=61/62 n=1 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.940054893s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=62'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.658813477s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.b( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.897900581s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.616790771s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.f( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.868362427s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587203979s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.8( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.939948082s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.658874512s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.b( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.897863388s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.616790771s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.f( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.868229866s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587203979s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.8( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.939908981s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.658874512s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.b( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.868102074s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587188721s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.3( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.939609528s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.658874512s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.9( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.867918015s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587265015s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.3( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.939493179s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.658874512s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.9( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.867882729s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587265015s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.4( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.939405441s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.658905029s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.1( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.897306442s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.616775513s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.1( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.897242546s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.616775513s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.4( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.939333916s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.658905029s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.6( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.867695808s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587280273s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.6( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.867674828s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587280273s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.6( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.939300537s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.658966064s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.897101402s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.616744995s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.6( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.939195633s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.658966064s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.896965027s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.616744995s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.b( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.867910385s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587188721s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.4( v 48'4 (0'0,48'4] local-lis/les=57/59 n=1 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.867398262s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587493896s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.5( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.896954536s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.617126465s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.5( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.896933556s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.617126465s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.18( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.938714981s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.658935547s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.4( v 48'4 (0'0,48'4] local-lis/les=57/59 n=1 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.867290497s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587493896s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.18( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.938659668s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.658935547s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.1b( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.866957664s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587326050s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.1a( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.938565254s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.658950806s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.1b( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.866939545s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587326050s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.1a( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.938541412s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.658950806s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.896711349s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.617385864s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.896691322s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.617385864s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.1c( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.938236237s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.659027100s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.1c( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.938210487s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.659027100s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.1f( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.866488457s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587387085s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.1f( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.866466522s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587387085s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.18( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.866612434s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587463379s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.896478653s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.617431641s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.1e( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.937709808s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.658981323s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.1e( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.937676430s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.658981323s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.18( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.866206169s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587463379s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.1d( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.866219521s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587600708s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.1d( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.866194725s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587600708s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.1f( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.937556267s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.658981323s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.1f( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.937532425s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.658981323s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.895994186s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.617431641s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.1d( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.895943642s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.617477417s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.1d( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.895924568s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.617477417s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.1c( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.865922928s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587600708s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.10( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.937307358s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.659011841s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.1c( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.865901947s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587600708s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.10( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.937273026s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.659011841s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.12( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.865839958s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587661743s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.12( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.865820885s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587661743s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.897483826s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.619354248s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.897460938s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.619354248s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.11( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.865662575s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587631226s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.12( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.937047958s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.659011841s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.11( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.865646362s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587631226s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.19( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.937030792s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.659042358s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.12( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.936997414s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.659011841s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.19( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.937007904s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.659042358s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.1b( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.895446777s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.617538452s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.1b( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.895427704s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.617538452s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.11( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.936983109s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.658996582s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.1a( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.865384102s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587631226s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.1a( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.865368843s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587631226s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.11( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.936758995s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.658996582s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.1b( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.936628342s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.658966064s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.1b( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.936588287s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.658966064s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[11.10( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[8.10( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.11( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.5( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[8.b( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[11.4( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[11.14( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[8.6( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[8.9( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[11.6( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.9( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[8.f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[11.e( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[8.e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[8.c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[11.f( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.1( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[11.1( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.3( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[8.18( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[11.19( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.1b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[8.1a( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[11.17( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[8.14( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[8.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[8.1d( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[8.15( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[11.15( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[11.2( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[11.3( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[8.2( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[11.d( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[11.8( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[8.d( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[11.9( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[8.4( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[11.18( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[8.1b( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[11.1b( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[11.1c( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[11.1e( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[8.12( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[11.11( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[8.11( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[11.12( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[11.b( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[11.1a( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[11.1f( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[8.1c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.d( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.849390030s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 61'22 mlcod 61'22 active pruub 132.094467163s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.1e( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.849451065s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.094573975s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.13( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.849112511s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.094223022s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.d( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.849340439s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 61'22 mlcod 0'0 unknown NOTIFY pruub 132.094467163s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[10.d( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[10.b( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[10.12( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.b( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848711014s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.093948364s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.b( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848665237s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.093948364s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[10.1e( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.12( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848638535s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.094146729s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.13( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848741531s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.094223022s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.1e( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.849034309s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.094573975s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.12( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848603249s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.094146729s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.10( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848593712s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.094329834s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[10.7( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.10( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848567963s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.094329834s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.1a( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848437309s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.094375610s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.1a( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848406792s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.094375610s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.19( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848434448s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.094421387s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.19( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848391533s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.094421387s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.7( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848324776s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.094436646s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.7( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848262787s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.094436646s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.6( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848258972s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.094467163s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.6( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848228455s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.094467163s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.4( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848482132s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.094818115s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.8( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848127365s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.094497681s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.4( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848447800s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.094818115s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.f( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848217010s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.094619751s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.8( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848087311s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.094497681s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.f( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848190308s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.094619751s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.9( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848142624s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 61'22 mlcod 61'22 active pruub 132.094635010s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.9( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848088264s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 61'22 mlcod 0'0 unknown NOTIFY pruub 132.094635010s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.11( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.847746849s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.094268799s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.e( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848030090s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 61'22 mlcod 61'22 active pruub 132.094711304s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[10.4( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.1( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.847915649s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.094696045s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.e( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.847978592s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 61'22 mlcod 0'0 unknown NOTIFY pruub 132.094711304s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.2( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848220825s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.095169067s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.2( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848194122s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.095169067s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.1( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.847879410s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.094696045s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.14( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.847599030s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 61'22 mlcod 61'22 active pruub 132.094726562s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.14( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.847535133s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 61'22 mlcod 0'0 unknown NOTIFY pruub 132.094726562s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.15( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.847527504s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 61'22 mlcod 61'22 active pruub 132.094726562s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[10.8( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.15( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.847389221s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 61'22 mlcod 0'0 unknown NOTIFY pruub 132.094726562s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.16( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.847386360s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.094787598s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.16( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.847353935s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.094787598s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.17( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.847392082s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.094818115s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.17( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.847299576s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.094818115s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[10.9( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.11( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.847588539s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.094268799s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[10.13( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[10.e( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[10.1( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[10.10( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[10.1a( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[10.15( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[10.19( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[10.16( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[10.6( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[10.f( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[10.17( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[10.2( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[10.14( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:51 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[10.11( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Nov 24 18:22:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Nov 24 18:22:52 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Nov 24 18:22:52 compute-0 ceph-mon[74927]: pgmap v151: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 314 B/s wr, 2 op/s
Nov 24 18:22:52 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 18:22:52 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 18:22:52 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 24 18:22:52 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 18:22:52 compute-0 ceph-mon[74927]: osdmap e63: 3 total, 3 up, 3 in
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.1b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.1b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.3( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.3( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.1( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.1( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.9( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.9( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.5( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.11( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.5( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.11( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.1b( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.1b( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.1d( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.1d( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.5( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.5( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.1( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.1( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.b( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.b( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.9( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.9( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.d( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.d( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.3( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.3( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.11( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.11( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:52 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[8.1c( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[10.13( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[10.8( v 52'16 (0'0,52'16] local-lis/les=63/64 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[11.1f( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[8.11( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[11.12( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[11.b( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[11.11( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[11.1a( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[11.1e( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[11.1c( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[8.12( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[11.18( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[11.1b( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[8.4( v 48'4 (0'0,48'4] local-lis/les=63/64 n=1 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[8.1b( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[8.d( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[11.9( v 62'1 lc 0'0 (0'0,62'1] local-lis/les=63/64 n=1 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=62'1 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[11.d( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[8.2( v 48'4 (0'0,48'4] local-lis/les=63/64 n=1 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[11.3( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[11.8( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[8.15( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[11.2( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[11.15( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[10.10( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[10.11( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[10.6( v 52'16 (0'0,52'16] local-lis/les=63/64 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[10.1a( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[10.19( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[10.b( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[10.f( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[10.9( v 62'23 lc 61'22 (0'0,62'23] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=62'23 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[10.17( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[10.7( v 52'16 (0'0,52'16] local-lis/les=63/64 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[10.4( v 52'16 (0'0,52'16] local-lis/les=63/64 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[10.16( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[10.1e( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[8.1d( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[8.1f( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[10.1( v 52'16 (0'0,52'16] local-lis/les=63/64 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[10.15( v 62'23 lc 61'22 (0'0,62'23] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=62'23 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[10.e( v 62'23 lc 61'22 (0'0,62'23] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=62'23 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[11.17( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[8.14( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[10.d( v 62'23 lc 61'22 (0'0,62'23] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=62'23 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[8.1a( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[8.18( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[11.19( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[11.1( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[11.f( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[8.c( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[8.e( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[11.e( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[11.6( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[8.6( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[11.14( v 62'1 lc 0'0 (0'0,62'1] local-lis/les=63/64 n=1 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=62'1 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[8.9( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[8.b( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[8.10( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[11.4( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[8.f( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[11.10( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[10.12( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[10.2( v 52'16 (0'0,52'16] local-lis/les=63/64 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[10.14( v 62'23 lc 61'22 (0'0,62'23] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=62'23 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:22:52 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v154: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 316 B/s wr, 2 op/s
Nov 24 18:22:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 24 18:22:52 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 24 18:22:53 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Nov 24 18:22:53 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 24 18:22:53 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Nov 24 18:22:53 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Nov 24 18:22:53 compute-0 ceph-mon[74927]: osdmap e64: 3 total, 3 up, 3 in
Nov 24 18:22:53 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 24 18:22:53 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Nov 24 18:22:53 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Nov 24 18:22:53 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.11( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:53 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.b( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:53 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:53 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:53 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.d( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:53 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.1d( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:53 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.5( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:53 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:53 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.9( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:53 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:53 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.1b( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:53 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:53 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:53 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:53 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.1( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:53 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.3( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:54 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.15 deep-scrub starts
Nov 24 18:22:54 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.15 deep-scrub ok
Nov 24 18:22:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Nov 24 18:22:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Nov 24 18:22:54 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Nov 24 18:22:54 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:54 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:54 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.1d( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:54 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.1d( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:54 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:54 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:54 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.d( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:54 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.d( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:54 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:54 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:54 compute-0 ceph-mon[74927]: pgmap v154: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 316 B/s wr, 2 op/s
Nov 24 18:22:54 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.b( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:54 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 24 18:22:54 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.b( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:54 compute-0 ceph-mon[74927]: osdmap e65: 3 total, 3 up, 3 in
Nov 24 18:22:54 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.5( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:54 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.5( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:54 compute-0 ceph-mon[74927]: 2.1d scrub starts
Nov 24 18:22:54 compute-0 ceph-mon[74927]: 2.1d scrub ok
Nov 24 18:22:54 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.11( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:54 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.11( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.1d( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.580228806s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.361923218s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.580214500s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.361984253s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.1d( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.580147743s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.361923218s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.580163956s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.361984253s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.5( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.580101967s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.361953735s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.579842567s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.361831665s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.b( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.579828262s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.361740112s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.5( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.579917908s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.361953735s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.b( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.579610825s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.361740112s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.11( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.579423904s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.361724854s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.11( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.579375267s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.361724854s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.d( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.579146385s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.361892700s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.578907013s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.361801147s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.d( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.578978539s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.361892700s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.578929901s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.361831665s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.578817368s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.361801147s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:54 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v157: 321 pgs: 5 active+remapped, 2 active+recovery_wait+remapped, 1 active+recovering+remapped, 313 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 20/215 objects misplaced (9.302%); 660 B/s, 8 objects/s recovering
Nov 24 18:22:55 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Nov 24 18:22:55 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Nov 24 18:22:55 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Nov 24 18:22:55 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Nov 24 18:22:55 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Nov 24 18:22:55 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.576583862s) [0] async=[0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.362167358s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:55 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.576522827s) [0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.362167358s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:55 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.3( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.575687408s) [0] async=[0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.361968994s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:55 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.3( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.575637817s) [0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.361968994s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:55 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:55 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:55 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.1b( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:55 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.1b( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:55 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:55 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:55 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.3( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:55 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.3( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:55 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.1( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:55 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.1( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:55 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.9( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:55 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.9( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:55 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:55 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:55 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:55 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:22:55 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.9( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.573685646s) [0] async=[0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.361999512s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:55 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.9( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.573624611s) [0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.361999512s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:55 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.572578430s) [0] async=[0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.362060547s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:55 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.572525978s) [0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.362060547s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:55 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.572337151s) [0] async=[0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.362136841s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:55 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.572283745s) [0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.362136841s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:55 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.571805000s) [0] async=[0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.361770630s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:55 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.1b( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.572100639s) [0] async=[0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.362136841s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:55 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.571748734s) [0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.361770630s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:55 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.1b( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.572053909s) [0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.362136841s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:55 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.1( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.571900368s) [0] async=[0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.362289429s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:22:55 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.1( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.571837425s) [0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.362289429s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:22:55 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.11( v 55'385 (0'0,55'385] local-lis/les=66/67 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:55 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=66/67 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:55 compute-0 ceph-mon[74927]: 6.15 deep-scrub starts
Nov 24 18:22:55 compute-0 ceph-mon[74927]: 6.15 deep-scrub ok
Nov 24 18:22:55 compute-0 ceph-mon[74927]: osdmap e66: 3 total, 3 up, 3 in
Nov 24 18:22:55 compute-0 ceph-mon[74927]: pgmap v157: 321 pgs: 5 active+remapped, 2 active+recovery_wait+remapped, 1 active+recovering+remapped, 313 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 20/215 objects misplaced (9.302%); 660 B/s, 8 objects/s recovering
Nov 24 18:22:55 compute-0 ceph-mon[74927]: osdmap e67: 3 total, 3 up, 3 in
Nov 24 18:22:55 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=66/67 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:55 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.d( v 55'385 (0'0,55'385] local-lis/les=66/67 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:55 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=66/67 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:55 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.1d( v 55'385 (0'0,55'385] local-lis/les=66/67 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:55 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.b( v 55'385 (0'0,55'385] local-lis/les=66/67 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:55 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.5( v 55'385 (0'0,55'385] local-lis/les=66/67 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:56 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.11 deep-scrub starts
Nov 24 18:22:56 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.11 deep-scrub ok
Nov 24 18:22:56 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Nov 24 18:22:56 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Nov 24 18:22:56 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Nov 24 18:22:56 compute-0 ceph-mon[74927]: 4.11 scrub starts
Nov 24 18:22:56 compute-0 ceph-mon[74927]: 4.11 scrub ok
Nov 24 18:22:56 compute-0 ceph-mon[74927]: osdmap e68: 3 total, 3 up, 3 in
Nov 24 18:22:56 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 68 pg[9.1b( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:56 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 68 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:56 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 68 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:56 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 68 pg[9.1( v 55'385 (0'0,55'385] local-lis/les=67/68 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:56 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 68 pg[9.9( v 55'385 (0'0,55'385] local-lis/les=67/68 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:56 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 68 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=67/68 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:56 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 68 pg[9.3( v 55'385 (0'0,55'385] local-lis/les=67/68 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:56 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 68 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:22:56 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v160: 321 pgs: 5 active+remapped, 2 active+recovery_wait+remapped, 1 active+recovering+remapped, 313 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 20/215 objects misplaced (9.302%); 660 B/s, 8 objects/s recovering
Nov 24 18:22:56 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 24 18:22:56 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 24 18:22:57 compute-0 ceph-mon[74927]: 6.11 deep-scrub starts
Nov 24 18:22:57 compute-0 ceph-mon[74927]: 6.11 deep-scrub ok
Nov 24 18:22:57 compute-0 ceph-mon[74927]: pgmap v160: 321 pgs: 5 active+remapped, 2 active+recovery_wait+remapped, 1 active+recovering+remapped, 313 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 20/215 objects misplaced (9.302%); 660 B/s, 8 objects/s recovering
Nov 24 18:22:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:22:58 compute-0 ceph-mon[74927]: 6.d scrub starts
Nov 24 18:22:58 compute-0 ceph-mon[74927]: 6.d scrub ok
Nov 24 18:22:58 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v161: 321 pgs: 5 active+remapped, 2 active+recovery_wait+remapped, 1 active+recovering+remapped, 313 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 20/215 objects misplaced (9.302%); 492 B/s, 6 objects/s recovering
Nov 24 18:22:58 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.f scrub starts
Nov 24 18:22:58 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.f scrub ok
Nov 24 18:22:59 compute-0 ceph-mon[74927]: pgmap v161: 321 pgs: 5 active+remapped, 2 active+recovery_wait+remapped, 1 active+recovering+remapped, 313 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 20/215 objects misplaced (9.302%); 492 B/s, 6 objects/s recovering
Nov 24 18:23:00 compute-0 sshd-session[105009]: Accepted publickey for zuul from 192.168.122.30 port 58384 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:23:00 compute-0 systemd-logind[822]: New session 33 of user zuul.
Nov 24 18:23:00 compute-0 systemd[1]: Started Session 33 of User zuul.
Nov 24 18:23:00 compute-0 sshd-session[105009]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:23:00 compute-0 ceph-mon[74927]: 4.f scrub starts
Nov 24 18:23:00 compute-0 ceph-mon[74927]: 4.f scrub ok
Nov 24 18:23:00 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v162: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 353 B/s, 13 objects/s recovering
Nov 24 18:23:00 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 24 18:23:00 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 24 18:23:00 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 24 18:23:00 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 24 18:23:01 compute-0 python3.9[105162]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:23:01 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Nov 24 18:23:01 compute-0 ceph-mon[74927]: pgmap v162: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 353 B/s, 13 objects/s recovering
Nov 24 18:23:01 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 24 18:23:01 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 24 18:23:01 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Nov 24 18:23:01 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Nov 24 18:23:02 compute-0 sudo[105378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezjjbwzoptrdwndtuwsfxfdimkehlacb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008581.9760213-32-168019118281806/AnsiballZ_command.py'
Nov 24 18:23:02 compute-0 sudo[105378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:23:02 compute-0 ceph-mon[74927]: 4.10 scrub starts
Nov 24 18:23:02 compute-0 ceph-mon[74927]: 4.10 scrub ok
Nov 24 18:23:02 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 24 18:23:02 compute-0 ceph-mon[74927]: osdmap e69: 3 total, 3 up, 3 in
Nov 24 18:23:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:23:02 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v164: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 185 B/s, 8 objects/s recovering
Nov 24 18:23:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 24 18:23:02 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 24 18:23:02 compute-0 python3.9[105380]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                             pushd /var/tmp
                                             curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                             pushd repo-setup-main
                                             python3 -m venv ./venv
                                             PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                             ./venv/bin/repo-setup current-podified -b antelope
                                             popd
                                             rm -rf repo-setup-main
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:23:02 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Nov 24 18:23:02 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Nov 24 18:23:03 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Nov 24 18:23:03 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Nov 24 18:23:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Nov 24 18:23:03 compute-0 ceph-mon[74927]: pgmap v164: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 185 B/s, 8 objects/s recovering
Nov 24 18:23:03 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 24 18:23:03 compute-0 ceph-mon[74927]: 2.1c scrub starts
Nov 24 18:23:03 compute-0 ceph-mon[74927]: 2.1c scrub ok
Nov 24 18:23:03 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 24 18:23:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Nov 24 18:23:03 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Nov 24 18:23:04 compute-0 ceph-mon[74927]: 6.13 scrub starts
Nov 24 18:23:04 compute-0 ceph-mon[74927]: 6.13 scrub ok
Nov 24 18:23:04 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 24 18:23:04 compute-0 ceph-mon[74927]: osdmap e70: 3 total, 3 up, 3 in
Nov 24 18:23:04 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v166: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 170 B/s, 7 objects/s recovering
Nov 24 18:23:04 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 24 18:23:04 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 24 18:23:04 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.c scrub starts
Nov 24 18:23:04 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.c scrub ok
Nov 24 18:23:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:23:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:23:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:23:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:23:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:23:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:23:04 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.f scrub starts
Nov 24 18:23:04 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.f scrub ok
Nov 24 18:23:05 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Nov 24 18:23:05 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Nov 24 18:23:05 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Nov 24 18:23:05 compute-0 ceph-mon[74927]: pgmap v166: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 170 B/s, 7 objects/s recovering
Nov 24 18:23:05 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 24 18:23:05 compute-0 ceph-mon[74927]: 2.f scrub starts
Nov 24 18:23:05 compute-0 ceph-mon[74927]: 2.f scrub ok
Nov 24 18:23:05 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 24 18:23:05 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Nov 24 18:23:05 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Nov 24 18:23:05 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 24 18:23:05 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 24 18:23:06 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 24 18:23:06 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 24 18:23:06 compute-0 ceph-mon[74927]: 6.c scrub starts
Nov 24 18:23:06 compute-0 ceph-mon[74927]: 6.c scrub ok
Nov 24 18:23:06 compute-0 ceph-mon[74927]: 4.13 scrub starts
Nov 24 18:23:06 compute-0 ceph-mon[74927]: 4.13 scrub ok
Nov 24 18:23:06 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 24 18:23:06 compute-0 ceph-mon[74927]: osdmap e71: 3 total, 3 up, 3 in
Nov 24 18:23:06 compute-0 ceph-mon[74927]: 6.f scrub starts
Nov 24 18:23:06 compute-0 ceph-mon[74927]: 6.f scrub ok
Nov 24 18:23:06 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v168: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 24 18:23:06 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 24 18:23:06 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Nov 24 18:23:06 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Nov 24 18:23:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Nov 24 18:23:07 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 24 18:23:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Nov 24 18:23:07 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Nov 24 18:23:07 compute-0 ceph-mon[74927]: 6.e scrub starts
Nov 24 18:23:07 compute-0 ceph-mon[74927]: 6.e scrub ok
Nov 24 18:23:07 compute-0 ceph-mon[74927]: pgmap v168: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:07 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 24 18:23:07 compute-0 ceph-mon[74927]: 2.2 scrub starts
Nov 24 18:23:07 compute-0 ceph-mon[74927]: 2.2 scrub ok
Nov 24 18:23:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:23:07 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Nov 24 18:23:07 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Nov 24 18:23:08 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 24 18:23:08 compute-0 ceph-mon[74927]: osdmap e72: 3 total, 3 up, 3 in
Nov 24 18:23:08 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 72 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=15.536156654s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 163.615783691s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:08 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 72 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=15.536096573s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.615783691s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:08 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 72 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=15.536886215s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 163.617172241s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:08 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 72 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=15.536849022s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617172241s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:08 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 72 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=15.537218094s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 163.617813110s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:08 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 72 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=15.537192345s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617813110s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:08 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 72 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=15.536213875s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 163.617156982s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:08 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 72 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=15.536179543s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617156982s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:08 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 72 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:08 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 72 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:08 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 72 pg[9.6( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:08 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 72 pg[9.e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:08 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v170: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 24 18:23:08 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 24 18:23:08 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 24 18:23:08 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 24 18:23:08 compute-0 sudo[105410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:23:08 compute-0 sudo[105410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:23:08 compute-0 sudo[105410]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:08 compute-0 sudo[105435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:23:08 compute-0 sudo[105435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:23:08 compute-0 sudo[105435]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:08 compute-0 sudo[105460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:23:08 compute-0 sudo[105460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:23:08 compute-0 sudo[105460]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:09 compute-0 sudo[105485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:23:09 compute-0 sudo[105485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:23:09 compute-0 sudo[105485]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Nov 24 18:23:09 compute-0 ceph-mon[74927]: 6.2 scrub starts
Nov 24 18:23:09 compute-0 ceph-mon[74927]: 6.2 scrub ok
Nov 24 18:23:09 compute-0 ceph-mon[74927]: pgmap v170: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:09 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 24 18:23:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:23:09 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:23:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:23:09 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:23:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:23:09 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 24 18:23:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Nov 24 18:23:09 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Nov 24 18:23:09 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 73 pg[9.e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:09 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 73 pg[9.e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:09 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 73 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:09 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 73 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:09 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 73 pg[9.6( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:09 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 73 pg[9.6( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:09 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 73 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:09 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 73 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:09 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:23:09 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 974ffd96-c3eb-4cd8-8569-86d4a7a02be5 does not exist
Nov 24 18:23:09 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 33a606b8-2bdc-4762-97dd-08a92f9a95c6 does not exist
Nov 24 18:23:09 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 2202143b-209b-44f3-ad13-1b97ca2be463 does not exist
Nov 24 18:23:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:23:09 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:23:09 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 73 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:09 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 73 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:09 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 73 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:09 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 73 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:09 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 73 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:09 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 73 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:09 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 73 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:09 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 73 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:23:09 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:23:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:23:09 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:23:09 compute-0 sudo[105545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:23:09 compute-0 sudo[105545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:23:09 compute-0 sudo[105545]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:09 compute-0 sudo[105570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:23:09 compute-0 sudo[105570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:23:09 compute-0 sudo[105570]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:09 compute-0 sudo[105378]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:09 compute-0 sudo[105595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:23:09 compute-0 sudo[105595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:23:09 compute-0 sudo[105595]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:09 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 73 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=73 pruub=9.535032272s) [2] r=-1 lpr=73 pi=[66,73)/1 crt=55'385 mlcod 0'0 active pruub 165.258880615s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:09 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 73 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=73 pruub=9.534981728s) [2] r=-1 lpr=73 pi=[66,73)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 165.258880615s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:09 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 73 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=66/67 n=5 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=73 pruub=9.528047562s) [2] r=-1 lpr=73 pi=[66,73)/1 crt=55'385 mlcod 0'0 active pruub 165.251983643s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:09 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 73 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=66/67 n=5 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=73 pruub=9.527892113s) [2] r=-1 lpr=73 pi=[66,73)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 165.251983643s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:09 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 73 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=67/68 n=6 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=73 pruub=10.544498444s) [2] r=-1 lpr=73 pi=[67,73)/1 crt=55'385 mlcod 0'0 active pruub 166.268676758s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:09 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 73 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=67/68 n=6 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=73 pruub=10.544478416s) [2] r=-1 lpr=73 pi=[67,73)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 166.268676758s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:09 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 73 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=66/67 n=5 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=73 pruub=9.534360886s) [2] r=-1 lpr=73 pi=[66,73)/1 crt=55'385 mlcod 0'0 active pruub 165.258819580s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:09 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 73 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=66/67 n=5 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=73 pruub=9.534062386s) [2] r=-1 lpr=73 pi=[66,73)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 165.258819580s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:09 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 73 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=73) [2] r=0 lpr=73 pi=[66,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:09 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 73 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=73) [2] r=0 lpr=73 pi=[67,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:09 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 73 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=73) [2] r=0 lpr=73 pi=[66,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:09 compute-0 sudo[105622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:23:09 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 73 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=73) [2] r=0 lpr=73 pi=[66,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:09 compute-0 sudo[105622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:23:10 compute-0 podman[105708]: 2025-11-24 18:23:10.020918845 +0000 UTC m=+0.042679924 container create 0e180a0e3241d45881739baaf2f5c62b460466563ea2dc18c751343d80c20eb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 18:23:10 compute-0 sshd-session[105012]: Connection closed by 192.168.122.30 port 58384
Nov 24 18:23:10 compute-0 sshd-session[105009]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:23:10 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Nov 24 18:23:10 compute-0 systemd[1]: session-33.scope: Consumed 8.306s CPU time.
Nov 24 18:23:10 compute-0 systemd-logind[822]: Session 33 logged out. Waiting for processes to exit.
Nov 24 18:23:10 compute-0 systemd-logind[822]: Removed session 33.
Nov 24 18:23:10 compute-0 systemd[1]: Started libpod-conmon-0e180a0e3241d45881739baaf2f5c62b460466563ea2dc18c751343d80c20eb7.scope.
Nov 24 18:23:10 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:23:10 compute-0 podman[105708]: 2025-11-24 18:23:09.999421884 +0000 UTC m=+0.021183013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:23:10 compute-0 podman[105708]: 2025-11-24 18:23:10.101922597 +0000 UTC m=+0.123683676 container init 0e180a0e3241d45881739baaf2f5c62b460466563ea2dc18c751343d80c20eb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:23:10 compute-0 podman[105708]: 2025-11-24 18:23:10.10764365 +0000 UTC m=+0.129404739 container start 0e180a0e3241d45881739baaf2f5c62b460466563ea2dc18c751343d80c20eb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hamilton, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:23:10 compute-0 podman[105708]: 2025-11-24 18:23:10.111306204 +0000 UTC m=+0.133067283 container attach 0e180a0e3241d45881739baaf2f5c62b460466563ea2dc18c751343d80c20eb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hamilton, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 18:23:10 compute-0 cool_hamilton[105724]: 167 167
Nov 24 18:23:10 compute-0 systemd[1]: libpod-0e180a0e3241d45881739baaf2f5c62b460466563ea2dc18c751343d80c20eb7.scope: Deactivated successfully.
Nov 24 18:23:10 compute-0 conmon[105724]: conmon 0e180a0e3241d4588173 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0e180a0e3241d45881739baaf2f5c62b460466563ea2dc18c751343d80c20eb7.scope/container/memory.events
Nov 24 18:23:10 compute-0 podman[105708]: 2025-11-24 18:23:10.113400143 +0000 UTC m=+0.135161262 container died 0e180a0e3241d45881739baaf2f5c62b460466563ea2dc18c751343d80c20eb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hamilton, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 18:23:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-55239eee08468c653f8eb9257bb421397ad86f3384ec3b6aa6afe50ad13a0912-merged.mount: Deactivated successfully.
Nov 24 18:23:10 compute-0 podman[105708]: 2025-11-24 18:23:10.157164757 +0000 UTC m=+0.178925856 container remove 0e180a0e3241d45881739baaf2f5c62b460466563ea2dc18c751343d80c20eb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hamilton, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 18:23:10 compute-0 systemd[1]: libpod-conmon-0e180a0e3241d45881739baaf2f5c62b460466563ea2dc18c751343d80c20eb7.scope: Deactivated successfully.
Nov 24 18:23:10 compute-0 podman[105748]: 2025-11-24 18:23:10.348817675 +0000 UTC m=+0.050887298 container create 53cd5049445d7396daeb9157508ead3198e855177edf74e03112e9b6dd50cb75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sanderson, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 24 18:23:10 compute-0 systemd[1]: Started libpod-conmon-53cd5049445d7396daeb9157508ead3198e855177edf74e03112e9b6dd50cb75.scope.
Nov 24 18:23:10 compute-0 podman[105748]: 2025-11-24 18:23:10.319058329 +0000 UTC m=+0.021127952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:23:10 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:23:10 compute-0 ceph-mon[74927]: 4.d scrub starts
Nov 24 18:23:10 compute-0 ceph-mon[74927]: 4.d scrub ok
Nov 24 18:23:10 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:23:10 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:23:10 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 24 18:23:10 compute-0 ceph-mon[74927]: osdmap e73: 3 total, 3 up, 3 in
Nov 24 18:23:10 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:23:10 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:23:10 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:23:10 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:23:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfcbefd78237e9028b78a198f93e1886aa87c6c1810b6ffc8f4f2e2d91d3b92a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:23:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfcbefd78237e9028b78a198f93e1886aa87c6c1810b6ffc8f4f2e2d91d3b92a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:23:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfcbefd78237e9028b78a198f93e1886aa87c6c1810b6ffc8f4f2e2d91d3b92a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:23:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfcbefd78237e9028b78a198f93e1886aa87c6c1810b6ffc8f4f2e2d91d3b92a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:23:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfcbefd78237e9028b78a198f93e1886aa87c6c1810b6ffc8f4f2e2d91d3b92a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:23:10 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Nov 24 18:23:10 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Nov 24 18:23:10 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Nov 24 18:23:10 compute-0 podman[105748]: 2025-11-24 18:23:10.465438199 +0000 UTC m=+0.167507812 container init 53cd5049445d7396daeb9157508ead3198e855177edf74e03112e9b6dd50cb75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sanderson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:23:10 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 74 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=66/67 n=5 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=0 lpr=74 pi=[66,74)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:10 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 74 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=66/67 n=5 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=0 lpr=74 pi=[66,74)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:10 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 74 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=67/68 n=6 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=74) [2]/[0] r=0 lpr=74 pi=[67,74)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:10 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 74 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=67/68 n=6 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=74) [2]/[0] r=0 lpr=74 pi=[67,74)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:10 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 74 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=66/67 n=5 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=0 lpr=74 pi=[66,74)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:10 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 74 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=66/67 n=5 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=0 lpr=74 pi=[66,74)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:10 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 74 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=0 lpr=74 pi=[66,74)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:10 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 74 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=0 lpr=74 pi=[66,74)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:10 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 74 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[66,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:10 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 74 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[66,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:10 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 74 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[66,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:10 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 74 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[67,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:10 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 74 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[66,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:10 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 74 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[66,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:10 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 74 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[66,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:10 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 74 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[67,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:10 compute-0 podman[105748]: 2025-11-24 18:23:10.476994788 +0000 UTC m=+0.179064381 container start 53cd5049445d7396daeb9157508ead3198e855177edf74e03112e9b6dd50cb75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sanderson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 18:23:10 compute-0 podman[105748]: 2025-11-24 18:23:10.483914944 +0000 UTC m=+0.185984557 container attach 53cd5049445d7396daeb9157508ead3198e855177edf74e03112e9b6dd50cb75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sanderson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 18:23:10 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 74 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[59,73)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:23:10 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 74 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[59,73)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:23:10 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 74 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[59,73)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:23:10 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 74 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[59,73)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:23:10 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v173: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:10 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 24 18:23:10 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 24 18:23:10 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Nov 24 18:23:10 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Nov 24 18:23:11 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.a scrub starts
Nov 24 18:23:11 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.a scrub ok
Nov 24 18:23:11 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Nov 24 18:23:11 compute-0 ceph-mon[74927]: osdmap e74: 3 total, 3 up, 3 in
Nov 24 18:23:11 compute-0 ceph-mon[74927]: pgmap v173: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:11 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 24 18:23:11 compute-0 ceph-mon[74927]: 5.5 scrub starts
Nov 24 18:23:11 compute-0 ceph-mon[74927]: 5.5 scrub ok
Nov 24 18:23:11 compute-0 ceph-mon[74927]: 4.a scrub starts
Nov 24 18:23:11 compute-0 ceph-mon[74927]: 4.a scrub ok
Nov 24 18:23:11 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 24 18:23:11 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Nov 24 18:23:11 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Nov 24 18:23:11 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 75 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:11 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 75 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:11 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 75 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:11 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 75 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:11 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 75 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:11 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 75 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:11 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 75 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:11 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 75 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:11 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994512558s) [2] async=[2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 166.084869385s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:11 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994291306s) [2] async=[2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 166.084701538s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:11 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994224548s) [2] async=[2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 166.084793091s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:11 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994330406s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084869385s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:11 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994112968s) [2] async=[2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 166.084762573s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:11 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994158745s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084793091s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:11 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994071960s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084762573s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:11 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.993534088s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084701538s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:11 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525858879s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 163.617202759s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:11 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525828362s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617202759s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:11 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 75 pg[9.8( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:11 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525444031s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 163.617691040s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:11 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525407791s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617691040s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:11 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 75 pg[9.18( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:11 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 75 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=74) [2]/[0] async=[2] r=0 lpr=74 pi=[67,74)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:23:11 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 75 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] async=[2] r=0 lpr=74 pi=[66,74)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:23:11 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 75 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] async=[2] r=0 lpr=74 pi=[66,74)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:23:11 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 75 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] async=[2] r=0 lpr=74 pi=[66,74)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:23:11 compute-0 gallant_sanderson[105764]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:23:11 compute-0 gallant_sanderson[105764]: --> relative data size: 1.0
Nov 24 18:23:11 compute-0 gallant_sanderson[105764]: --> All data devices are unavailable
Nov 24 18:23:11 compute-0 systemd[1]: libpod-53cd5049445d7396daeb9157508ead3198e855177edf74e03112e9b6dd50cb75.scope: Deactivated successfully.
Nov 24 18:23:11 compute-0 systemd[1]: libpod-53cd5049445d7396daeb9157508ead3198e855177edf74e03112e9b6dd50cb75.scope: Consumed 1.011s CPU time.
Nov 24 18:23:11 compute-0 podman[105748]: 2025-11-24 18:23:11.59133795 +0000 UTC m=+1.293407563 container died 53cd5049445d7396daeb9157508ead3198e855177edf74e03112e9b6dd50cb75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sanderson, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:23:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfcbefd78237e9028b78a198f93e1886aa87c6c1810b6ffc8f4f2e2d91d3b92a-merged.mount: Deactivated successfully.
Nov 24 18:23:11 compute-0 podman[105748]: 2025-11-24 18:23:11.652228841 +0000 UTC m=+1.354298474 container remove 53cd5049445d7396daeb9157508ead3198e855177edf74e03112e9b6dd50cb75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sanderson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 24 18:23:11 compute-0 systemd[1]: libpod-conmon-53cd5049445d7396daeb9157508ead3198e855177edf74e03112e9b6dd50cb75.scope: Deactivated successfully.
Nov 24 18:23:11 compute-0 sudo[105622]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:11 compute-0 sudo[105806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:23:11 compute-0 sudo[105806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:23:11 compute-0 sudo[105806]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:11 compute-0 sudo[105831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:23:11 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Nov 24 18:23:11 compute-0 sudo[105831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:23:11 compute-0 sudo[105831]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:11 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Nov 24 18:23:11 compute-0 sudo[105856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:23:11 compute-0 sudo[105856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:23:11 compute-0 sudo[105856]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:11 compute-0 sudo[105881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:23:11 compute-0 sudo[105881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:23:12 compute-0 podman[105947]: 2025-11-24 18:23:12.252563623 +0000 UTC m=+0.040967886 container create c97f4479c956ade8b6a023296f68ea081f4ec856dd21e4ce92939f25132f8cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_yalow, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:23:12 compute-0 systemd[1]: Started libpod-conmon-c97f4479c956ade8b6a023296f68ea081f4ec856dd21e4ce92939f25132f8cae.scope.
Nov 24 18:23:12 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:23:12 compute-0 podman[105947]: 2025-11-24 18:23:12.237210676 +0000 UTC m=+0.025614959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:23:12 compute-0 podman[105947]: 2025-11-24 18:23:12.339305298 +0000 UTC m=+0.127709571 container init c97f4479c956ade8b6a023296f68ea081f4ec856dd21e4ce92939f25132f8cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 18:23:12 compute-0 podman[105947]: 2025-11-24 18:23:12.349307712 +0000 UTC m=+0.137711985 container start c97f4479c956ade8b6a023296f68ea081f4ec856dd21e4ce92939f25132f8cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_yalow, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:23:12 compute-0 podman[105947]: 2025-11-24 18:23:12.353226654 +0000 UTC m=+0.141630957 container attach c97f4479c956ade8b6a023296f68ea081f4ec856dd21e4ce92939f25132f8cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_yalow, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:23:12 compute-0 exciting_yalow[105963]: 167 167
Nov 24 18:23:12 compute-0 systemd[1]: libpod-c97f4479c956ade8b6a023296f68ea081f4ec856dd21e4ce92939f25132f8cae.scope: Deactivated successfully.
Nov 24 18:23:12 compute-0 conmon[105963]: conmon c97f4479c956ade8b6a0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c97f4479c956ade8b6a023296f68ea081f4ec856dd21e4ce92939f25132f8cae.scope/container/memory.events
Nov 24 18:23:12 compute-0 podman[105947]: 2025-11-24 18:23:12.356812046 +0000 UTC m=+0.145216309 container died c97f4479c956ade8b6a023296f68ea081f4ec856dd21e4ce92939f25132f8cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 18:23:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-29cc692344ee60d1a39561138a61074344613325ec7c026efb79d7045cea2e03-merged.mount: Deactivated successfully.
Nov 24 18:23:12 compute-0 podman[105947]: 2025-11-24 18:23:12.402668619 +0000 UTC m=+0.191072882 container remove c97f4479c956ade8b6a023296f68ea081f4ec856dd21e4ce92939f25132f8cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 24 18:23:12 compute-0 systemd[1]: libpod-conmon-c97f4479c956ade8b6a023296f68ea081f4ec856dd21e4ce92939f25132f8cae.scope: Deactivated successfully.
Nov 24 18:23:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Nov 24 18:23:12 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 24 18:23:12 compute-0 ceph-mon[74927]: osdmap e75: 3 total, 3 up, 3 in
Nov 24 18:23:12 compute-0 ceph-mon[74927]: 2.1f scrub starts
Nov 24 18:23:12 compute-0 ceph-mon[74927]: 2.1f scrub ok
Nov 24 18:23:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Nov 24 18:23:12 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Nov 24 18:23:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:23:12 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.8( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[59,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:12 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.8( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[59,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:12 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=0 lpr=76 pi=[66,76)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:12 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=0 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:12 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.18( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[59,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:12 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.18( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[59,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:12 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=74/67 les/c/f=75/68/0 sis=76) [2] r=0 lpr=76 pi=[67,76)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:12 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=74/67 les/c/f=75/68/0 sis=76) [2] r=0 lpr=76 pi=[67,76)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:12 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=0 lpr=76 pi=[66,76)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:12 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=0 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:12 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76 pruub=14.997705460s) [2] async=[2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 55'385 active pruub 173.554550171s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:12 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76 pruub=14.997131348s) [2] async=[2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 55'385 active pruub 173.554504395s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:12 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76 pruub=14.997094154s) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 173.554504395s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:12 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76 pruub=14.997041702s) [2] async=[2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 55'385 active pruub 173.554534912s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:12 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76 pruub=14.997647285s) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 173.554550171s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:12 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76 pruub=14.996972084s) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 173.554534912s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:12 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/67 les/c/f=75/68/0 sis=76 pruub=14.996159554s) [2] async=[2] r=-1 lpr=76 pi=[67,76)/1 crt=55'385 mlcod 55'385 active pruub 173.554031372s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:12 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=0 lpr=76 pi=[66,76)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:12 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=0 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:12 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/67 les/c/f=75/68/0 sis=76 pruub=14.996058464s) [2] r=-1 lpr=76 pi=[67,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 173.554031372s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:12 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:23:12 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:12 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:12 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:12 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:12 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=75/76 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:23:12 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:23:12 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:23:12 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v176: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 24 18:23:12 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 24 18:23:12 compute-0 podman[105988]: 2025-11-24 18:23:12.55013205 +0000 UTC m=+0.044664610 container create 2a5e28bd13ec7c474a92501aa4c1c824335a44d043a661002acd7716ba8cb6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wu, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 18:23:12 compute-0 systemd[1]: Started libpod-conmon-2a5e28bd13ec7c474a92501aa4c1c824335a44d043a661002acd7716ba8cb6b4.scope.
Nov 24 18:23:12 compute-0 podman[105988]: 2025-11-24 18:23:12.53148748 +0000 UTC m=+0.026020040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:23:12 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:23:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e7926b0f807469338b5a3f494d5cb565b5cdfb602f342441744fd20d416658b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:23:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e7926b0f807469338b5a3f494d5cb565b5cdfb602f342441744fd20d416658b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:23:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e7926b0f807469338b5a3f494d5cb565b5cdfb602f342441744fd20d416658b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:23:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e7926b0f807469338b5a3f494d5cb565b5cdfb602f342441744fd20d416658b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:23:12 compute-0 podman[105988]: 2025-11-24 18:23:12.648231458 +0000 UTC m=+0.142764028 container init 2a5e28bd13ec7c474a92501aa4c1c824335a44d043a661002acd7716ba8cb6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 18:23:12 compute-0 podman[105988]: 2025-11-24 18:23:12.656925386 +0000 UTC m=+0.151457926 container start 2a5e28bd13ec7c474a92501aa4c1c824335a44d043a661002acd7716ba8cb6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:23:12 compute-0 podman[105988]: 2025-11-24 18:23:12.659839468 +0000 UTC m=+0.154372028 container attach 2a5e28bd13ec7c474a92501aa4c1c824335a44d043a661002acd7716ba8cb6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:23:13 compute-0 elegant_wu[106005]: {
Nov 24 18:23:13 compute-0 elegant_wu[106005]:     "0": [
Nov 24 18:23:13 compute-0 elegant_wu[106005]:         {
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "devices": [
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "/dev/loop3"
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             ],
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "lv_name": "ceph_lv0",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "lv_size": "21470642176",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "name": "ceph_lv0",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "tags": {
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.cluster_name": "ceph",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.crush_device_class": "",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.encrypted": "0",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.osd_id": "0",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.type": "block",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.vdo": "0"
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             },
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "type": "block",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "vg_name": "ceph_vg0"
Nov 24 18:23:13 compute-0 elegant_wu[106005]:         }
Nov 24 18:23:13 compute-0 elegant_wu[106005]:     ],
Nov 24 18:23:13 compute-0 elegant_wu[106005]:     "1": [
Nov 24 18:23:13 compute-0 elegant_wu[106005]:         {
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "devices": [
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "/dev/loop4"
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             ],
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "lv_name": "ceph_lv1",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "lv_size": "21470642176",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "name": "ceph_lv1",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "tags": {
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.cluster_name": "ceph",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.crush_device_class": "",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.encrypted": "0",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.osd_id": "1",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.type": "block",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.vdo": "0"
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             },
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "type": "block",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "vg_name": "ceph_vg1"
Nov 24 18:23:13 compute-0 elegant_wu[106005]:         }
Nov 24 18:23:13 compute-0 elegant_wu[106005]:     ],
Nov 24 18:23:13 compute-0 elegant_wu[106005]:     "2": [
Nov 24 18:23:13 compute-0 elegant_wu[106005]:         {
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "devices": [
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "/dev/loop5"
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             ],
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "lv_name": "ceph_lv2",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "lv_size": "21470642176",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "name": "ceph_lv2",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "tags": {
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.cluster_name": "ceph",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.crush_device_class": "",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.encrypted": "0",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.osd_id": "2",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.type": "block",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:                 "ceph.vdo": "0"
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             },
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "type": "block",
Nov 24 18:23:13 compute-0 elegant_wu[106005]:             "vg_name": "ceph_vg2"
Nov 24 18:23:13 compute-0 elegant_wu[106005]:         }
Nov 24 18:23:13 compute-0 elegant_wu[106005]:     ]
Nov 24 18:23:13 compute-0 elegant_wu[106005]: }
Nov 24 18:23:13 compute-0 systemd[1]: libpod-2a5e28bd13ec7c474a92501aa4c1c824335a44d043a661002acd7716ba8cb6b4.scope: Deactivated successfully.
Nov 24 18:23:13 compute-0 podman[105988]: 2025-11-24 18:23:13.389546498 +0000 UTC m=+0.884079058 container died 2a5e28bd13ec7c474a92501aa4c1c824335a44d043a661002acd7716ba8cb6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wu, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 18:23:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e7926b0f807469338b5a3f494d5cb565b5cdfb602f342441744fd20d416658b-merged.mount: Deactivated successfully.
Nov 24 18:23:13 compute-0 podman[105988]: 2025-11-24 18:23:13.463914922 +0000 UTC m=+0.958447462 container remove 2a5e28bd13ec7c474a92501aa4c1c824335a44d043a661002acd7716ba8cb6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wu, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:23:13 compute-0 systemd[1]: libpod-conmon-2a5e28bd13ec7c474a92501aa4c1c824335a44d043a661002acd7716ba8cb6b4.scope: Deactivated successfully.
Nov 24 18:23:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Nov 24 18:23:13 compute-0 sudo[105881]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:13 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 24 18:23:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Nov 24 18:23:13 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Nov 24 18:23:13 compute-0 ceph-mon[74927]: osdmap e76: 3 total, 3 up, 3 in
Nov 24 18:23:13 compute-0 ceph-mon[74927]: pgmap v176: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:13 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 24 18:23:13 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:23:13 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:23:13 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 77 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=0 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:23:13 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 77 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=0 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:23:13 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 77 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=74/67 les/c/f=75/68/0 sis=76) [2] r=0 lpr=76 pi=[67,76)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:23:13 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 77 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=0 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:23:13 compute-0 sudo[106026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:23:13 compute-0 sudo[106026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:23:13 compute-0 sudo[106026]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:13 compute-0 sudo[106051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:23:13 compute-0 sudo[106051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:23:13 compute-0 sudo[106051]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:13 compute-0 sudo[106076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:23:13 compute-0 sudo[106076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:23:13 compute-0 sudo[106076]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:13 compute-0 sudo[106101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:23:13 compute-0 sudo[106101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:23:13 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.2 deep-scrub starts
Nov 24 18:23:13 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.2 deep-scrub ok
Nov 24 18:23:14 compute-0 podman[106166]: 2025-11-24 18:23:14.003248821 +0000 UTC m=+0.033095651 container create 07a7aee6d2f95b2eccf997a142aca9b11b304c2adb7f7d7e99584651891629fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cannon, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:23:14 compute-0 systemd[1]: Started libpod-conmon-07a7aee6d2f95b2eccf997a142aca9b11b304c2adb7f7d7e99584651891629fb.scope.
Nov 24 18:23:14 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:23:14 compute-0 podman[106166]: 2025-11-24 18:23:14.067121187 +0000 UTC m=+0.096968037 container init 07a7aee6d2f95b2eccf997a142aca9b11b304c2adb7f7d7e99584651891629fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:23:14 compute-0 podman[106166]: 2025-11-24 18:23:14.073029985 +0000 UTC m=+0.102876805 container start 07a7aee6d2f95b2eccf997a142aca9b11b304c2adb7f7d7e99584651891629fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cannon, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 18:23:14 compute-0 podman[106166]: 2025-11-24 18:23:14.07604271 +0000 UTC m=+0.105889560 container attach 07a7aee6d2f95b2eccf997a142aca9b11b304c2adb7f7d7e99584651891629fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cannon, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:23:14 compute-0 intelligent_cannon[106182]: 167 167
Nov 24 18:23:14 compute-0 systemd[1]: libpod-07a7aee6d2f95b2eccf997a142aca9b11b304c2adb7f7d7e99584651891629fb.scope: Deactivated successfully.
Nov 24 18:23:14 compute-0 podman[106166]: 2025-11-24 18:23:14.077387878 +0000 UTC m=+0.107234708 container died 07a7aee6d2f95b2eccf997a142aca9b11b304c2adb7f7d7e99584651891629fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cannon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:23:14 compute-0 podman[106166]: 2025-11-24 18:23:13.990135298 +0000 UTC m=+0.019982148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:23:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-162718e8c0ea29864fdb2c418d50c4229f1fef2670303e08c21c434658d1df3d-merged.mount: Deactivated successfully.
Nov 24 18:23:14 compute-0 podman[106166]: 2025-11-24 18:23:14.11579582 +0000 UTC m=+0.145642650 container remove 07a7aee6d2f95b2eccf997a142aca9b11b304c2adb7f7d7e99584651891629fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cannon, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 18:23:14 compute-0 systemd[1]: libpod-conmon-07a7aee6d2f95b2eccf997a142aca9b11b304c2adb7f7d7e99584651891629fb.scope: Deactivated successfully.
Nov 24 18:23:14 compute-0 podman[106205]: 2025-11-24 18:23:14.26920722 +0000 UTC m=+0.054802088 container create a05349e3d974411bece9097075365118b3de089803718567dcc1c80a5197b943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:23:14 compute-0 systemd[1]: Started libpod-conmon-a05349e3d974411bece9097075365118b3de089803718567dcc1c80a5197b943.scope.
Nov 24 18:23:14 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:23:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/360b6198dd933be1f37ae597274bae1b31bccce54ff6244f3a9cedcfb1705d5d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:23:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/360b6198dd933be1f37ae597274bae1b31bccce54ff6244f3a9cedcfb1705d5d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:23:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/360b6198dd933be1f37ae597274bae1b31bccce54ff6244f3a9cedcfb1705d5d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:23:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/360b6198dd933be1f37ae597274bae1b31bccce54ff6244f3a9cedcfb1705d5d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:23:14 compute-0 podman[106205]: 2025-11-24 18:23:14.243692465 +0000 UTC m=+0.029287423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:23:14 compute-0 podman[106205]: 2025-11-24 18:23:14.344792279 +0000 UTC m=+0.130387187 container init a05349e3d974411bece9097075365118b3de089803718567dcc1c80a5197b943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_sanderson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:23:14 compute-0 podman[106205]: 2025-11-24 18:23:14.353664201 +0000 UTC m=+0.139259119 container start a05349e3d974411bece9097075365118b3de089803718567dcc1c80a5197b943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_sanderson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:23:14 compute-0 podman[106205]: 2025-11-24 18:23:14.359085245 +0000 UTC m=+0.144680123 container attach a05349e3d974411bece9097075365118b3de089803718567dcc1c80a5197b943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:23:14 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Nov 24 18:23:14 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v178: 321 pgs: 2 active+remapped, 4 peering, 315 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 244 B/s, 12 objects/s recovering
Nov 24 18:23:14 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Nov 24 18:23:14 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Nov 24 18:23:14 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 24 18:23:14 compute-0 ceph-mon[74927]: osdmap e77: 3 total, 3 up, 3 in
Nov 24 18:23:14 compute-0 ceph-mon[74927]: 5.2 deep-scrub starts
Nov 24 18:23:14 compute-0 ceph-mon[74927]: 5.2 deep-scrub ok
Nov 24 18:23:14 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.984064102s) [2] async=[2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 169.117111206s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:14 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983811378s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.117111206s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:14 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983452797s) [2] async=[2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 169.117080688s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:14 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983373642s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.117080688s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:14 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:14 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:14 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:14 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:15 compute-0 zealous_sanderson[106223]: {
Nov 24 18:23:15 compute-0 zealous_sanderson[106223]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:23:15 compute-0 zealous_sanderson[106223]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:23:15 compute-0 zealous_sanderson[106223]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:23:15 compute-0 zealous_sanderson[106223]:         "osd_id": 0,
Nov 24 18:23:15 compute-0 zealous_sanderson[106223]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:23:15 compute-0 zealous_sanderson[106223]:         "type": "bluestore"
Nov 24 18:23:15 compute-0 zealous_sanderson[106223]:     },
Nov 24 18:23:15 compute-0 zealous_sanderson[106223]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:23:15 compute-0 zealous_sanderson[106223]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:23:15 compute-0 zealous_sanderson[106223]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:23:15 compute-0 zealous_sanderson[106223]:         "osd_id": 1,
Nov 24 18:23:15 compute-0 zealous_sanderson[106223]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:23:15 compute-0 zealous_sanderson[106223]:         "type": "bluestore"
Nov 24 18:23:15 compute-0 zealous_sanderson[106223]:     },
Nov 24 18:23:15 compute-0 zealous_sanderson[106223]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:23:15 compute-0 zealous_sanderson[106223]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:23:15 compute-0 zealous_sanderson[106223]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:23:15 compute-0 zealous_sanderson[106223]:         "osd_id": 2,
Nov 24 18:23:15 compute-0 zealous_sanderson[106223]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:23:15 compute-0 zealous_sanderson[106223]:         "type": "bluestore"
Nov 24 18:23:15 compute-0 zealous_sanderson[106223]:     }
Nov 24 18:23:15 compute-0 zealous_sanderson[106223]: }
Nov 24 18:23:15 compute-0 systemd[1]: libpod-a05349e3d974411bece9097075365118b3de089803718567dcc1c80a5197b943.scope: Deactivated successfully.
Nov 24 18:23:15 compute-0 systemd[1]: libpod-a05349e3d974411bece9097075365118b3de089803718567dcc1c80a5197b943.scope: Consumed 1.019s CPU time.
Nov 24 18:23:15 compute-0 podman[106205]: 2025-11-24 18:23:15.371013066 +0000 UTC m=+1.156607984 container died a05349e3d974411bece9097075365118b3de089803718567dcc1c80a5197b943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_sanderson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:23:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-360b6198dd933be1f37ae597274bae1b31bccce54ff6244f3a9cedcfb1705d5d-merged.mount: Deactivated successfully.
Nov 24 18:23:15 compute-0 podman[106205]: 2025-11-24 18:23:15.434383618 +0000 UTC m=+1.219978496 container remove a05349e3d974411bece9097075365118b3de089803718567dcc1c80a5197b943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_sanderson, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 18:23:15 compute-0 systemd[1]: libpod-conmon-a05349e3d974411bece9097075365118b3de089803718567dcc1c80a5197b943.scope: Deactivated successfully.
Nov 24 18:23:15 compute-0 sudo[106101]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:15 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:23:15 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:23:15 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:23:15 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:23:15 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 3d17801c-bfd8-4722-84a9-679708a590ba does not exist
Nov 24 18:23:15 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 4b43a132-a143-4918-b9f0-1bc6124d620f does not exist
Nov 24 18:23:15 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Nov 24 18:23:15 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Nov 24 18:23:15 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Nov 24 18:23:15 compute-0 ceph-mon[74927]: pgmap v178: 321 pgs: 2 active+remapped, 4 peering, 315 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 244 B/s, 12 objects/s recovering
Nov 24 18:23:15 compute-0 ceph-mon[74927]: osdmap e78: 3 total, 3 up, 3 in
Nov 24 18:23:15 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:23:15 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:23:15 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=78/79 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:23:15 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=78/79 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:23:15 compute-0 sudo[106270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:23:15 compute-0 sudo[106270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:23:15 compute-0 sudo[106270]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:15 compute-0 sudo[106295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:23:15 compute-0 sudo[106295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:23:15 compute-0 sudo[106295]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:15 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Nov 24 18:23:15 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Nov 24 18:23:15 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 24 18:23:16 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 24 18:23:16 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 24 18:23:16 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 24 18:23:16 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v181: 321 pgs: 2 active+remapped, 4 peering, 315 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 246 B/s, 12 objects/s recovering
Nov 24 18:23:16 compute-0 ceph-mon[74927]: osdmap e79: 3 total, 3 up, 3 in
Nov 24 18:23:16 compute-0 ceph-mon[74927]: 5.3 scrub starts
Nov 24 18:23:16 compute-0 ceph-mon[74927]: 5.3 scrub ok
Nov 24 18:23:16 compute-0 ceph-mon[74927]: 6.8 scrub starts
Nov 24 18:23:16 compute-0 ceph-mon[74927]: 6.8 scrub ok
Nov 24 18:23:16 compute-0 ceph-mon[74927]: 6.1 scrub starts
Nov 24 18:23:16 compute-0 ceph-mon[74927]: 6.1 scrub ok
Nov 24 18:23:16 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 24 18:23:16 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 24 18:23:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:23:17 compute-0 ceph-mon[74927]: pgmap v181: 321 pgs: 2 active+remapped, 4 peering, 315 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 246 B/s, 12 objects/s recovering
Nov 24 18:23:17 compute-0 ceph-mon[74927]: 4.e scrub starts
Nov 24 18:23:17 compute-0 ceph-mon[74927]: 4.e scrub ok
Nov 24 18:23:17 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.b scrub starts
Nov 24 18:23:17 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.b scrub ok
Nov 24 18:23:18 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v182: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 164 B/s, 8 objects/s recovering
Nov 24 18:23:18 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 24 18:23:18 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 24 18:23:18 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Nov 24 18:23:18 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 24 18:23:18 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Nov 24 18:23:18 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Nov 24 18:23:18 compute-0 ceph-mon[74927]: 2.b scrub starts
Nov 24 18:23:18 compute-0 ceph-mon[74927]: 2.b scrub ok
Nov 24 18:23:18 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 24 18:23:19 compute-0 ceph-mon[74927]: pgmap v182: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 164 B/s, 8 objects/s recovering
Nov 24 18:23:19 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 24 18:23:19 compute-0 ceph-mon[74927]: osdmap e80: 3 total, 3 up, 3 in
Nov 24 18:23:19 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Nov 24 18:23:19 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Nov 24 18:23:20 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Nov 24 18:23:20 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Nov 24 18:23:20 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v184: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:20 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 24 18:23:20 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 24 18:23:20 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Nov 24 18:23:20 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 24 18:23:20 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Nov 24 18:23:20 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Nov 24 18:23:20 compute-0 ceph-mon[74927]: 2.8 scrub starts
Nov 24 18:23:20 compute-0 ceph-mon[74927]: 2.8 scrub ok
Nov 24 18:23:20 compute-0 ceph-mon[74927]: 4.1b scrub starts
Nov 24 18:23:20 compute-0 ceph-mon[74927]: 4.1b scrub ok
Nov 24 18:23:20 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 24 18:23:21 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 24 18:23:21 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 24 18:23:21 compute-0 ceph-mon[74927]: pgmap v184: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:21 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 24 18:23:21 compute-0 ceph-mon[74927]: osdmap e81: 3 total, 3 up, 3 in
Nov 24 18:23:21 compute-0 ceph-mon[74927]: 4.5 scrub starts
Nov 24 18:23:21 compute-0 ceph-mon[74927]: 4.5 scrub ok
Nov 24 18:23:21 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.16 deep-scrub starts
Nov 24 18:23:21 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.16 deep-scrub ok
Nov 24 18:23:22 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 24 18:23:22 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 24 18:23:22 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Nov 24 18:23:22 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Nov 24 18:23:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:23:22 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v186: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 24 18:23:22 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 24 18:23:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Nov 24 18:23:22 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 24 18:23:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Nov 24 18:23:22 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Nov 24 18:23:22 compute-0 ceph-mon[74927]: 2.16 deep-scrub starts
Nov 24 18:23:22 compute-0 ceph-mon[74927]: 2.16 deep-scrub ok
Nov 24 18:23:22 compute-0 ceph-mon[74927]: 4.1a scrub starts
Nov 24 18:23:22 compute-0 ceph-mon[74927]: 4.1a scrub ok
Nov 24 18:23:22 compute-0 ceph-mon[74927]: 4.4 scrub starts
Nov 24 18:23:22 compute-0 ceph-mon[74927]: 4.4 scrub ok
Nov 24 18:23:22 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 24 18:23:23 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Nov 24 18:23:23 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Nov 24 18:23:23 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 24 18:23:23 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 24 18:23:23 compute-0 ceph-mon[74927]: pgmap v186: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:23 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 24 18:23:23 compute-0 ceph-mon[74927]: osdmap e82: 3 total, 3 up, 3 in
Nov 24 18:23:23 compute-0 ceph-mon[74927]: 4.1c scrub starts
Nov 24 18:23:23 compute-0 ceph-mon[74927]: 4.1c scrub ok
Nov 24 18:23:24 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.4 deep-scrub starts
Nov 24 18:23:24 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.4 deep-scrub ok
Nov 24 18:23:24 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v188: 321 pgs: 1 active+clean+scrubbing+deep, 320 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:24 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 24 18:23:24 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 24 18:23:24 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.14 deep-scrub starts
Nov 24 18:23:24 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.14 deep-scrub ok
Nov 24 18:23:24 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Nov 24 18:23:24 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 24 18:23:24 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Nov 24 18:23:24 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Nov 24 18:23:24 compute-0 ceph-mon[74927]: 5.15 scrub starts
Nov 24 18:23:24 compute-0 ceph-mon[74927]: 5.15 scrub ok
Nov 24 18:23:24 compute-0 ceph-mon[74927]: 6.4 deep-scrub starts
Nov 24 18:23:24 compute-0 ceph-mon[74927]: 6.4 deep-scrub ok
Nov 24 18:23:24 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 24 18:23:24 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 82 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.120257378s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 179.617523193s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:24 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 83 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.120175362s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.617523193s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:24 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 82 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.119441986s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 179.617889404s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:24 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 83 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.119354248s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.617889404s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:24 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:24 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:24 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Nov 24 18:23:24 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Nov 24 18:23:25 compute-0 sshd-session[106320]: Accepted publickey for zuul from 192.168.122.30 port 35388 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:23:25 compute-0 systemd-logind[822]: New session 34 of user zuul.
Nov 24 18:23:25 compute-0 systemd[1]: Started Session 34 of User zuul.
Nov 24 18:23:25 compute-0 sshd-session[106320]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:23:25 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Nov 24 18:23:25 compute-0 ceph-mon[74927]: pgmap v188: 321 pgs: 1 active+clean+scrubbing+deep, 320 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:25 compute-0 ceph-mon[74927]: 5.14 deep-scrub starts
Nov 24 18:23:25 compute-0 ceph-mon[74927]: 5.14 deep-scrub ok
Nov 24 18:23:25 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 24 18:23:25 compute-0 ceph-mon[74927]: osdmap e83: 3 total, 3 up, 3 in
Nov 24 18:23:25 compute-0 ceph-mon[74927]: 6.1f scrub starts
Nov 24 18:23:25 compute-0 ceph-mon[74927]: 6.1f scrub ok
Nov 24 18:23:25 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Nov 24 18:23:25 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Nov 24 18:23:25 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:25 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:25 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:25 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:25 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:25 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Nov 24 18:23:25 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:25 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:25 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:25 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Nov 24 18:23:26 compute-0 python3.9[106473]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 24 18:23:26 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v191: 321 pgs: 1 active+clean+scrubbing+deep, 320 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:26 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 24 18:23:26 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 24 18:23:26 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Nov 24 18:23:26 compute-0 ceph-mon[74927]: osdmap e84: 3 total, 3 up, 3 in
Nov 24 18:23:26 compute-0 ceph-mon[74927]: 7.1c scrub starts
Nov 24 18:23:26 compute-0 ceph-mon[74927]: 7.1c scrub ok
Nov 24 18:23:26 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 24 18:23:26 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 24 18:23:26 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Nov 24 18:23:26 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Nov 24 18:23:26 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Nov 24 18:23:26 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Nov 24 18:23:27 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:23:27 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:23:27 compute-0 python3.9[106647]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:23:27 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.b deep-scrub starts
Nov 24 18:23:27 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.b deep-scrub ok
Nov 24 18:23:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:23:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Nov 24 18:23:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Nov 24 18:23:27 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Nov 24 18:23:27 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439636230s) [2] async=[2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 182.619369507s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:27 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439560890s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.619369507s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:27 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439293861s) [2] async=[2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 182.619827271s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:27 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439209938s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.619827271s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:27 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:27 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:27 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:27 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:27 compute-0 ceph-mon[74927]: pgmap v191: 321 pgs: 1 active+clean+scrubbing+deep, 320 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:27 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 24 18:23:27 compute-0 ceph-mon[74927]: osdmap e85: 3 total, 3 up, 3 in
Nov 24 18:23:27 compute-0 ceph-mon[74927]: 3.18 scrub starts
Nov 24 18:23:27 compute-0 ceph-mon[74927]: 3.18 scrub ok
Nov 24 18:23:27 compute-0 ceph-mon[74927]: 6.b deep-scrub starts
Nov 24 18:23:27 compute-0 ceph-mon[74927]: 6.b deep-scrub ok
Nov 24 18:23:27 compute-0 ceph-mon[74927]: osdmap e86: 3 total, 3 up, 3 in
Nov 24 18:23:28 compute-0 sudo[106801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwrgkanentoodmxmtawbjwntqcvtyofo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008607.7927232-45-188553572588584/AnsiballZ_command.py'
Nov 24 18:23:28 compute-0 sudo[106801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:23:28 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Nov 24 18:23:28 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Nov 24 18:23:28 compute-0 python3.9[106803]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:23:28 compute-0 sudo[106801]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:28 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v194: 321 pgs: 2 peering, 1 active+clean+scrubbing+deep, 318 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Nov 24 18:23:28 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Nov 24 18:23:28 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Nov 24 18:23:28 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Nov 24 18:23:28 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:23:28 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:23:28 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Nov 24 18:23:28 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Nov 24 18:23:29 compute-0 ceph-mon[74927]: 4.7 scrub starts
Nov 24 18:23:29 compute-0 ceph-mon[74927]: 4.7 scrub ok
Nov 24 18:23:29 compute-0 ceph-mon[74927]: osdmap e87: 3 total, 3 up, 3 in
Nov 24 18:23:29 compute-0 sudo[106954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuljplriazjvytumywisbisevmnuqman ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008608.7444043-57-234900947139477/AnsiballZ_stat.py'
Nov 24 18:23:29 compute-0 sudo[106954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:23:29 compute-0 python3.9[106956]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:23:29 compute-0 sudo[106954]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:29 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Nov 24 18:23:29 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Nov 24 18:23:29 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Nov 24 18:23:29 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Nov 24 18:23:29 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Nov 24 18:23:29 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Nov 24 18:23:29 compute-0 sudo[107108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpcedkvzvlqdstmnkpwvujmnwbodifch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008609.5867968-68-787016955824/AnsiballZ_file.py'
Nov 24 18:23:29 compute-0 sudo[107108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:23:30 compute-0 ceph-mon[74927]: pgmap v194: 321 pgs: 2 peering, 1 active+clean+scrubbing+deep, 318 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Nov 24 18:23:30 compute-0 ceph-mon[74927]: 2.13 scrub starts
Nov 24 18:23:30 compute-0 ceph-mon[74927]: 2.13 scrub ok
Nov 24 18:23:30 compute-0 ceph-mon[74927]: 4.8 scrub starts
Nov 24 18:23:30 compute-0 ceph-mon[74927]: 4.8 scrub ok
Nov 24 18:23:30 compute-0 ceph-mon[74927]: 2.11 scrub starts
Nov 24 18:23:30 compute-0 ceph-mon[74927]: 2.11 scrub ok
Nov 24 18:23:30 compute-0 python3.9[107110]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:23:30 compute-0 sudo[107108]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:30 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v196: 321 pgs: 1 active+clean+scrubbing, 2 peering, 318 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 23 B/s, 2 objects/s recovering
Nov 24 18:23:30 compute-0 sudo[107260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqbnqomxlcvfiodmscmxodxctuqxbgql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008610.3857455-77-254846226840068/AnsiballZ_file.py'
Nov 24 18:23:30 compute-0 sudo[107260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:23:30 compute-0 python3.9[107262]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:23:30 compute-0 sudo[107260]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:31 compute-0 ceph-mon[74927]: 7.15 scrub starts
Nov 24 18:23:31 compute-0 ceph-mon[74927]: 7.15 scrub ok
Nov 24 18:23:31 compute-0 python3.9[107412]: ansible-ansible.builtin.service_facts Invoked
Nov 24 18:23:31 compute-0 network[107429]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 18:23:31 compute-0 network[107430]: 'network-scripts' will be removed from distribution in near future.
Nov 24 18:23:31 compute-0 network[107431]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 18:23:32 compute-0 ceph-mon[74927]: pgmap v196: 321 pgs: 1 active+clean+scrubbing, 2 peering, 318 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 23 B/s, 2 objects/s recovering
Nov 24 18:23:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:23:32 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v197: 321 pgs: 1 active+clean+scrubbing, 2 peering, 318 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 24 18:23:33 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Nov 24 18:23:33 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Nov 24 18:23:33 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Nov 24 18:23:33 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Nov 24 18:23:33 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.16 deep-scrub starts
Nov 24 18:23:33 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.16 deep-scrub ok
Nov 24 18:23:34 compute-0 ceph-mon[74927]: pgmap v197: 321 pgs: 1 active+clean+scrubbing, 2 peering, 318 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 24 18:23:34 compute-0 ceph-mon[74927]: 6.1c scrub starts
Nov 24 18:23:34 compute-0 ceph-mon[74927]: 6.1c scrub ok
Nov 24 18:23:34 compute-0 ceph-mon[74927]: 2.18 scrub starts
Nov 24 18:23:34 compute-0 ceph-mon[74927]: 2.18 scrub ok
Nov 24 18:23:34 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 24 18:23:34 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 24 18:23:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:23:34
Nov 24 18:23:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:23:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Some PGs (0.006231) are inactive; try again later
Nov 24 18:23:34 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v198: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 1 objects/s recovering
Nov 24 18:23:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 24 18:23:34 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 24 18:23:34 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 24 18:23:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:23:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:23:34 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 24 18:23:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:23:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:23:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:23:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:23:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:23:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:23:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:23:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:23:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:23:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:23:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:23:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:23:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:23:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:23:34 compute-0 python3.9[107691]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:23:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Nov 24 18:23:35 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 24 18:23:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Nov 24 18:23:35 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Nov 24 18:23:35 compute-0 ceph-mon[74927]: 3.16 deep-scrub starts
Nov 24 18:23:35 compute-0 ceph-mon[74927]: 3.16 deep-scrub ok
Nov 24 18:23:35 compute-0 ceph-mon[74927]: 6.6 scrub starts
Nov 24 18:23:35 compute-0 ceph-mon[74927]: 6.6 scrub ok
Nov 24 18:23:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 24 18:23:35 compute-0 ceph-mon[74927]: 3.17 scrub starts
Nov 24 18:23:35 compute-0 ceph-mon[74927]: 3.17 scrub ok
Nov 24 18:23:35 compute-0 python3.9[107841]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:23:36 compute-0 ceph-mon[74927]: pgmap v198: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 1 objects/s recovering
Nov 24 18:23:36 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 24 18:23:36 compute-0 ceph-mon[74927]: osdmap e88: 3 total, 3 up, 3 in
Nov 24 18:23:36 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v200: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:36 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Nov 24 18:23:36 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 24 18:23:36 compute-0 python3.9[107995]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:23:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Nov 24 18:23:37 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 24 18:23:37 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 24 18:23:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Nov 24 18:23:37 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Nov 24 18:23:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:23:37 compute-0 sudo[108151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vubnvvwvmouwmzynnmspvrcdkfvhiolg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008617.313889-125-94958595845381/AnsiballZ_setup.py'
Nov 24 18:23:37 compute-0 sudo[108151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:23:37 compute-0 python3.9[108153]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 18:23:38 compute-0 ceph-mon[74927]: pgmap v200: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:38 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 24 18:23:38 compute-0 ceph-mon[74927]: osdmap e89: 3 total, 3 up, 3 in
Nov 24 18:23:38 compute-0 sudo[108151]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:38 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Nov 24 18:23:38 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Nov 24 18:23:38 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v202: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:38 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Nov 24 18:23:38 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 24 18:23:38 compute-0 sudo[108235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrejvxjepwrbbmjemngayzrrruunjbwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008617.313889-125-94958595845381/AnsiballZ_dnf.py'
Nov 24 18:23:38 compute-0 sudo[108235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:23:38 compute-0 python3.9[108237]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:23:39 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Nov 24 18:23:39 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 24 18:23:39 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Nov 24 18:23:39 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Nov 24 18:23:39 compute-0 ceph-mon[74927]: 6.1e scrub starts
Nov 24 18:23:39 compute-0 ceph-mon[74927]: 6.1e scrub ok
Nov 24 18:23:39 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 24 18:23:39 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 24 18:23:39 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 24 18:23:40 compute-0 ceph-mon[74927]: pgmap v202: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:40 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 24 18:23:40 compute-0 ceph-mon[74927]: osdmap e90: 3 total, 3 up, 3 in
Nov 24 18:23:40 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v204: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:40 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Nov 24 18:23:40 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 24 18:23:41 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Nov 24 18:23:41 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 24 18:23:41 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Nov 24 18:23:41 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Nov 24 18:23:41 compute-0 ceph-mon[74927]: 7.11 scrub starts
Nov 24 18:23:41 compute-0 ceph-mon[74927]: 7.11 scrub ok
Nov 24 18:23:41 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 24 18:23:41 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Nov 24 18:23:41 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Nov 24 18:23:42 compute-0 ceph-mon[74927]: pgmap v204: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:42 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 24 18:23:42 compute-0 ceph-mon[74927]: osdmap e91: 3 total, 3 up, 3 in
Nov 24 18:23:42 compute-0 ceph-mon[74927]: 6.1d scrub starts
Nov 24 18:23:42 compute-0 ceph-mon[74927]: 6.1d scrub ok
Nov 24 18:23:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:23:42 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v206: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Nov 24 18:23:42 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 24 18:23:42 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.f deep-scrub starts
Nov 24 18:23:42 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.f deep-scrub ok
Nov 24 18:23:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:23:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:23:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:23:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:23:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:23:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:23:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:23:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:23:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:23:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:23:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:23:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:23:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:23:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:23:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:23:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:23:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:23:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:23:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:23:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:23:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:23:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:23:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:23:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Nov 24 18:23:43 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 24 18:23:43 compute-0 ceph-mon[74927]: 3.f deep-scrub starts
Nov 24 18:23:43 compute-0 ceph-mon[74927]: 3.f deep-scrub ok
Nov 24 18:23:43 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 24 18:23:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Nov 24 18:23:43 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Nov 24 18:23:43 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 92 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92 pruub=9.081287384s) [2] r=-1 lpr=92 pi=[67,92)/1 crt=55'385 mlcod 0'0 active pruub 198.269866943s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:43 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 92 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92 pruub=9.081098557s) [2] r=-1 lpr=92 pi=[67,92)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 198.269866943s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:43 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:43 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 24 18:23:43 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 24 18:23:44 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Nov 24 18:23:44 compute-0 ceph-mon[74927]: pgmap v206: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:44 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 24 18:23:44 compute-0 ceph-mon[74927]: osdmap e92: 3 total, 3 up, 3 in
Nov 24 18:23:44 compute-0 ceph-mon[74927]: 4.9 scrub starts
Nov 24 18:23:44 compute-0 ceph-mon[74927]: 4.9 scrub ok
Nov 24 18:23:44 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Nov 24 18:23:44 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Nov 24 18:23:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 93 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:44 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 93 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:44 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:44 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Nov 24 18:23:44 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Nov 24 18:23:44 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v209: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:45 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Nov 24 18:23:45 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Nov 24 18:23:45 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Nov 24 18:23:45 compute-0 ceph-mon[74927]: osdmap e93: 3 total, 3 up, 3 in
Nov 24 18:23:45 compute-0 ceph-mon[74927]: 5.18 scrub starts
Nov 24 18:23:45 compute-0 ceph-mon[74927]: 5.18 scrub ok
Nov 24 18:23:45 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 94 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] async=[2] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:23:45 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 24 18:23:45 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 24 18:23:45 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 24 18:23:45 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 24 18:23:46 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Nov 24 18:23:46 compute-0 ceph-mon[74927]: pgmap v209: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:46 compute-0 ceph-mon[74927]: osdmap e94: 3 total, 3 up, 3 in
Nov 24 18:23:46 compute-0 ceph-mon[74927]: 3.c scrub starts
Nov 24 18:23:46 compute-0 ceph-mon[74927]: 3.c scrub ok
Nov 24 18:23:46 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Nov 24 18:23:46 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Nov 24 18:23:46 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:46 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:46 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95 pruub=14.975742340s) [2] async=[2] r=-1 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 55'385 active pruub 207.216949463s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:46 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95 pruub=14.974552155s) [2] r=-1 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 207.216949463s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:46 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v212: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:46 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 24 18:23:46 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 24 18:23:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Nov 24 18:23:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Nov 24 18:23:47 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Nov 24 18:23:47 compute-0 ceph-mon[74927]: 3.e scrub starts
Nov 24 18:23:47 compute-0 ceph-mon[74927]: 3.e scrub ok
Nov 24 18:23:47 compute-0 ceph-mon[74927]: osdmap e95: 3 total, 3 up, 3 in
Nov 24 18:23:47 compute-0 ceph-mon[74927]: pgmap v212: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:47 compute-0 ceph-mon[74927]: 7.f scrub starts
Nov 24 18:23:47 compute-0 ceph-mon[74927]: 7.f scrub ok
Nov 24 18:23:47 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 96 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=95/96 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:23:47 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 24 18:23:47 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 24 18:23:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:23:47 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Nov 24 18:23:47 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Nov 24 18:23:48 compute-0 ceph-mon[74927]: osdmap e96: 3 total, 3 up, 3 in
Nov 24 18:23:48 compute-0 ceph-mon[74927]: 5.1d scrub starts
Nov 24 18:23:48 compute-0 ceph-mon[74927]: 5.1d scrub ok
Nov 24 18:23:48 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 24 18:23:48 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 24 18:23:48 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v214: 321 pgs: 321 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 233 B/s wr, 7 op/s; 50 B/s, 2 objects/s recovering
Nov 24 18:23:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Nov 24 18:23:48 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 24 18:23:48 compute-0 sudo[108235]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:49 compute-0 sudo[108489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtprbgutsdmsuhebufpwlnxxtchsergs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008628.7920394-137-196227056241367/AnsiballZ_command.py'
Nov 24 18:23:49 compute-0 sudo[108489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:23:49 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Nov 24 18:23:49 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 24 18:23:49 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Nov 24 18:23:49 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Nov 24 18:23:49 compute-0 ceph-mon[74927]: 3.11 scrub starts
Nov 24 18:23:49 compute-0 ceph-mon[74927]: 3.11 scrub ok
Nov 24 18:23:49 compute-0 ceph-mon[74927]: 7.6 scrub starts
Nov 24 18:23:49 compute-0 ceph-mon[74927]: 7.6 scrub ok
Nov 24 18:23:49 compute-0 ceph-mon[74927]: pgmap v214: 321 pgs: 321 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 233 B/s wr, 7 op/s; 50 B/s, 2 objects/s recovering
Nov 24 18:23:49 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 24 18:23:49 compute-0 python3.9[108491]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:23:49 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 24 18:23:49 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 24 18:23:49 compute-0 sudo[108489]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 24 18:23:50 compute-0 ceph-mon[74927]: osdmap e97: 3 total, 3 up, 3 in
Nov 24 18:23:50 compute-0 ceph-mon[74927]: 7.4 scrub starts
Nov 24 18:23:50 compute-0 ceph-mon[74927]: 7.4 scrub ok
Nov 24 18:23:50 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Nov 24 18:23:50 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Nov 24 18:23:50 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v216: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 190 B/s wr, 6 op/s; 40 B/s, 1 objects/s recovering
Nov 24 18:23:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Nov 24 18:23:50 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 24 18:23:50 compute-0 sudo[108776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjbzldrclppccnlmwswktzdkgpxxzgpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008630.1105487-145-257668531364108/AnsiballZ_selinux.py'
Nov 24 18:23:50 compute-0 sudo[108776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:23:50 compute-0 python3.9[108778]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 24 18:23:51 compute-0 sudo[108776]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:51 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Nov 24 18:23:51 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 24 18:23:51 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Nov 24 18:23:51 compute-0 ceph-mon[74927]: 5.1a scrub starts
Nov 24 18:23:51 compute-0 ceph-mon[74927]: 5.1a scrub ok
Nov 24 18:23:51 compute-0 ceph-mon[74927]: pgmap v216: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 190 B/s wr, 6 op/s; 40 B/s, 1 objects/s recovering
Nov 24 18:23:51 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 24 18:23:51 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Nov 24 18:23:51 compute-0 sudo[108928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uawhcrmuxrbusyiibnulhmvhkgxgqttc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008631.3958242-156-48882707194948/AnsiballZ_command.py'
Nov 24 18:23:51 compute-0 sudo[108928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:23:51 compute-0 python3.9[108930]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 24 18:23:51 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 24 18:23:51 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 24 18:23:51 compute-0 sudo[108928]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:52 compute-0 sudo[109080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufmzqsmmaznmlhfeochnmiwnzlristpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008631.9953315-164-140118871409763/AnsiballZ_file.py'
Nov 24 18:23:52 compute-0 sudo[109080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:23:52 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 24 18:23:52 compute-0 ceph-mon[74927]: osdmap e98: 3 total, 3 up, 3 in
Nov 24 18:23:52 compute-0 python3.9[109082]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:23:52 compute-0 sudo[109080]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:23:52 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v218: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 170 B/s wr, 5 op/s; 36 B/s, 1 objects/s recovering
Nov 24 18:23:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Nov 24 18:23:52 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 24 18:23:52 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Nov 24 18:23:52 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Nov 24 18:23:53 compute-0 sudo[109232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xksoqiqzdbhumbxacffxpmizuokbzmwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008632.594105-172-80169433086063/AnsiballZ_mount.py'
Nov 24 18:23:53 compute-0 sudo[109232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:23:53 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Nov 24 18:23:53 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 24 18:23:53 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Nov 24 18:23:53 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Nov 24 18:23:53 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 99 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99 pruub=15.274039268s) [0] r=-1 lpr=99 pi=[75,99)/1 crt=55'385 mlcod 0'0 active pruub 200.589569092s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:53 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 99 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99 pruub=15.273898125s) [0] r=-1 lpr=99 pi=[75,99)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 200.589569092s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:53 compute-0 ceph-mon[74927]: 3.5 scrub starts
Nov 24 18:23:53 compute-0 ceph-mon[74927]: 3.5 scrub ok
Nov 24 18:23:53 compute-0 ceph-mon[74927]: pgmap v218: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 170 B/s wr, 5 op/s; 36 B/s, 1 objects/s recovering
Nov 24 18:23:53 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 24 18:23:53 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 99 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99) [0] r=0 lpr=99 pi=[75,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:53 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 98 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98 pruub=14.972748756s) [1] r=-1 lpr=98 pi=[67,98)/1 crt=55'385 mlcod 0'0 active pruub 214.270065308s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:53 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 99 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98 pruub=14.972686768s) [1] r=-1 lpr=98 pi=[67,98)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 214.270065308s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:53 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:53 compute-0 python3.9[109234]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 24 18:23:53 compute-0 sudo[109232]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:53 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.3 deep-scrub starts
Nov 24 18:23:53 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.3 deep-scrub ok
Nov 24 18:23:53 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 24 18:23:53 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 24 18:23:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Nov 24 18:23:54 compute-0 ceph-mon[74927]: 7.8 scrub starts
Nov 24 18:23:54 compute-0 ceph-mon[74927]: 7.8 scrub ok
Nov 24 18:23:54 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 24 18:23:54 compute-0 ceph-mon[74927]: osdmap e99: 3 total, 3 up, 3 in
Nov 24 18:23:54 compute-0 ceph-mon[74927]: 3.3 deep-scrub starts
Nov 24 18:23:54 compute-0 ceph-mon[74927]: 3.3 deep-scrub ok
Nov 24 18:23:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Nov 24 18:23:54 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Nov 24 18:23:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:54 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:54 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:54 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=-1 lpr=100 pi=[75,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:54 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=-1 lpr=100 pi=[75,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.c deep-scrub starts
Nov 24 18:23:54 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:54 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.c deep-scrub ok
Nov 24 18:23:54 compute-0 sudo[109384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evkcyzzsxyopkjaghjdnnscaveghoetk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008634.0046093-200-196775031567023/AnsiballZ_file.py'
Nov 24 18:23:54 compute-0 sudo[109384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:23:54 compute-0 python3.9[109386]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:23:54 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Nov 24 18:23:54 compute-0 sudo[109384]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:54 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Nov 24 18:23:54 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v221: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:54 compute-0 sudo[109536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbfydkzdomzljizsairwaywmhipvcinc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008634.6911619-208-227644302087450/AnsiballZ_stat.py'
Nov 24 18:23:54 compute-0 sudo[109536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:23:55 compute-0 python3.9[109538]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:23:55 compute-0 sudo[109536]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:55 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Nov 24 18:23:55 compute-0 ceph-mon[74927]: 7.a scrub starts
Nov 24 18:23:55 compute-0 ceph-mon[74927]: 7.a scrub ok
Nov 24 18:23:55 compute-0 ceph-mon[74927]: osdmap e100: 3 total, 3 up, 3 in
Nov 24 18:23:55 compute-0 ceph-mon[74927]: 5.c deep-scrub starts
Nov 24 18:23:55 compute-0 ceph-mon[74927]: 5.c deep-scrub ok
Nov 24 18:23:55 compute-0 ceph-mon[74927]: 3.6 scrub starts
Nov 24 18:23:55 compute-0 ceph-mon[74927]: 3.6 scrub ok
Nov 24 18:23:55 compute-0 ceph-mon[74927]: pgmap v221: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:55 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Nov 24 18:23:55 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Nov 24 18:23:55 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 24 18:23:55 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 101 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:23:55 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 24 18:23:55 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 101 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] async=[1] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:23:55 compute-0 sudo[109614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xturvasonaenrizvfboukukeooimelif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008634.6911619-208-227644302087450/AnsiballZ_file.py'
Nov 24 18:23:55 compute-0 sudo[109614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:23:55 compute-0 python3.9[109616]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:23:55 compute-0 sudo[109614]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:56 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Nov 24 18:23:56 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Nov 24 18:23:56 compute-0 sudo[109766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szyfjnivipwdaujbdyzcgeswtahpthpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008635.9898415-229-47505311617424/AnsiballZ_stat.py'
Nov 24 18:23:56 compute-0 sudo[109766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:23:56 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Nov 24 18:23:56 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102 pruub=15.020028114s) [1] async=[1] r=-1 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 55'385 active pruub 217.350143433s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:56 compute-0 ceph-mon[74927]: osdmap e101: 3 total, 3 up, 3 in
Nov 24 18:23:56 compute-0 ceph-mon[74927]: 2.9 scrub starts
Nov 24 18:23:56 compute-0 ceph-mon[74927]: 2.9 scrub ok
Nov 24 18:23:56 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102 pruub=15.019754410s) [1] r=-1 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 217.350143433s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:56 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:56 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:56 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102 pruub=15.005826950s) [0] async=[0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 55'385 active pruub 203.362533569s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:56 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102 pruub=15.005747795s) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 203.362533569s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:23:56 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:23:56 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:23:56 compute-0 python3.9[109768]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:23:56 compute-0 sudo[109766]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:56 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v224: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Nov 24 18:23:57 compute-0 ceph-mon[74927]: osdmap e102: 3 total, 3 up, 3 in
Nov 24 18:23:57 compute-0 ceph-mon[74927]: pgmap v224: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Nov 24 18:23:57 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Nov 24 18:23:57 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 103 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=102/103 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:23:57 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 24 18:23:57 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 103 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=102/103 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:23:57 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 24 18:23:57 compute-0 sudo[109920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztlrnycxwyckvsuieajyxibitalwsddb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008636.9552085-242-21035284327891/AnsiballZ_getent.py'
Nov 24 18:23:57 compute-0 sudo[109920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:23:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:23:57 compute-0 python3.9[109922]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 24 18:23:57 compute-0 sudo[109920]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:58 compute-0 sudo[110073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcbbbrypufpqyrnstjiymucuengozbcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008637.8227024-252-13076271294728/AnsiballZ_getent.py'
Nov 24 18:23:58 compute-0 sudo[110073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:23:58 compute-0 python3.9[110075]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 24 18:23:58 compute-0 ceph-mon[74927]: osdmap e103: 3 total, 3 up, 3 in
Nov 24 18:23:58 compute-0 ceph-mon[74927]: 5.f scrub starts
Nov 24 18:23:58 compute-0 ceph-mon[74927]: 5.f scrub ok
Nov 24 18:23:58 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 24 18:23:58 compute-0 sudo[110073]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:58 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 24 18:23:58 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v226: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:58 compute-0 sudo[110226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhfdxbdkpfwcecspatqphyfsesxxwrtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008638.4615653-260-74256323277305/AnsiballZ_group.py'
Nov 24 18:23:58 compute-0 sudo[110226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:23:59 compute-0 python3.9[110228]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 18:23:59 compute-0 sudo[110226]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:59 compute-0 ceph-mon[74927]: 2.6 scrub starts
Nov 24 18:23:59 compute-0 ceph-mon[74927]: 2.6 scrub ok
Nov 24 18:23:59 compute-0 ceph-mon[74927]: pgmap v226: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:23:59 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Nov 24 18:23:59 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Nov 24 18:23:59 compute-0 sudo[110378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkhulwqnxadormboiqgcsupiaoqhtpyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008639.357402-269-71699826760003/AnsiballZ_file.py'
Nov 24 18:23:59 compute-0 sudo[110378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:23:59 compute-0 python3.9[110380]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 24 18:23:59 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Nov 24 18:23:59 compute-0 sudo[110378]: pam_unix(sudo:session): session closed for user root
Nov 24 18:23:59 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Nov 24 18:24:00 compute-0 ceph-mon[74927]: 7.3 scrub starts
Nov 24 18:24:00 compute-0 ceph-mon[74927]: 7.3 scrub ok
Nov 24 18:24:00 compute-0 sudo[110530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcohumbawzxzlqhloszxbonkanlmhjtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008640.1771924-280-54781203565310/AnsiballZ_dnf.py'
Nov 24 18:24:00 compute-0 sudo[110530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:24:00 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v227: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 170 B/s wr, 5 op/s; 54 B/s, 1 objects/s recovering
Nov 24 18:24:00 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Nov 24 18:24:00 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 24 18:24:00 compute-0 python3.9[110532]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:24:01 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Nov 24 18:24:01 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 24 18:24:01 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Nov 24 18:24:01 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Nov 24 18:24:01 compute-0 ceph-mon[74927]: 7.5 scrub starts
Nov 24 18:24:01 compute-0 ceph-mon[74927]: 7.5 scrub ok
Nov 24 18:24:01 compute-0 ceph-mon[74927]: pgmap v227: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 170 B/s wr, 5 op/s; 54 B/s, 1 objects/s recovering
Nov 24 18:24:01 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 24 18:24:01 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 24 18:24:01 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 24 18:24:02 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 24 18:24:02 compute-0 ceph-mon[74927]: osdmap e104: 3 total, 3 up, 3 in
Nov 24 18:24:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:24:02 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v229: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 163 B/s wr, 5 op/s; 52 B/s, 1 objects/s recovering
Nov 24 18:24:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Nov 24 18:24:02 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 24 18:24:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Nov 24 18:24:03 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 24 18:24:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Nov 24 18:24:03 compute-0 ceph-mon[74927]: 7.1 scrub starts
Nov 24 18:24:03 compute-0 ceph-mon[74927]: 7.1 scrub ok
Nov 24 18:24:03 compute-0 ceph-mon[74927]: pgmap v229: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 163 B/s wr, 5 op/s; 52 B/s, 1 objects/s recovering
Nov 24 18:24:03 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 24 18:24:03 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Nov 24 18:24:03 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.7 deep-scrub starts
Nov 24 18:24:03 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.7 deep-scrub ok
Nov 24 18:24:04 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 24 18:24:04 compute-0 ceph-mon[74927]: osdmap e105: 3 total, 3 up, 3 in
Nov 24 18:24:04 compute-0 ceph-mon[74927]: 2.7 deep-scrub starts
Nov 24 18:24:04 compute-0 ceph-mon[74927]: 2.7 deep-scrub ok
Nov 24 18:24:04 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 24 18:24:04 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 24 18:24:04 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v231: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 141 B/s wr, 4 op/s; 45 B/s, 1 objects/s recovering
Nov 24 18:24:04 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Nov 24 18:24:04 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 24 18:24:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:24:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:24:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:24:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:24:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:24:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:24:04 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 24 18:24:04 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 24 18:24:05 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Nov 24 18:24:05 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 24 18:24:05 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Nov 24 18:24:05 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Nov 24 18:24:05 compute-0 ceph-mon[74927]: 5.1 scrub starts
Nov 24 18:24:05 compute-0 ceph-mon[74927]: 5.1 scrub ok
Nov 24 18:24:05 compute-0 ceph-mon[74927]: pgmap v231: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 141 B/s wr, 4 op/s; 45 B/s, 1 objects/s recovering
Nov 24 18:24:05 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 24 18:24:05 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Nov 24 18:24:05 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Nov 24 18:24:05 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 106 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106 pruub=10.653602600s) [2] r=-1 lpr=106 pi=[67,106)/1 crt=55'385 mlcod 0'0 active pruub 222.269836426s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:24:05 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 106 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106 pruub=10.653409958s) [2] r=-1 lpr=106 pi=[67,106)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 222.269836426s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:24:05 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:24:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Nov 24 18:24:06 compute-0 ceph-mon[74927]: 7.2 scrub starts
Nov 24 18:24:06 compute-0 ceph-mon[74927]: 7.2 scrub ok
Nov 24 18:24:06 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 24 18:24:06 compute-0 ceph-mon[74927]: osdmap e106: 3 total, 3 up, 3 in
Nov 24 18:24:06 compute-0 ceph-mon[74927]: 2.4 scrub starts
Nov 24 18:24:06 compute-0 ceph-mon[74927]: 2.4 scrub ok
Nov 24 18:24:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Nov 24 18:24:06 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Nov 24 18:24:06 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:24:06 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:24:06 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 107 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:24:06 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 107 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:24:06 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v234: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Nov 24 18:24:06 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 24 18:24:06 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.8 deep-scrub starts
Nov 24 18:24:06 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.8 deep-scrub ok
Nov 24 18:24:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Nov 24 18:24:07 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Nov 24 18:24:07 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 24 18:24:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Nov 24 18:24:07 compute-0 ceph-mon[74927]: osdmap e107: 3 total, 3 up, 3 in
Nov 24 18:24:07 compute-0 ceph-mon[74927]: pgmap v234: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:07 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 24 18:24:07 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Nov 24 18:24:07 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Nov 24 18:24:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:24:07 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 108 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] async=[2] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:24:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Nov 24 18:24:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Nov 24 18:24:08 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Nov 24 18:24:08 compute-0 sudo[110530]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:08 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109 pruub=15.457493782s) [2] async=[2] r=-1 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 55'385 active pruub 229.911041260s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:24:08 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109 pruub=15.457426071s) [2] r=-1 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 229.911041260s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:24:08 compute-0 ceph-mon[74927]: 3.8 deep-scrub starts
Nov 24 18:24:08 compute-0 ceph-mon[74927]: 3.8 deep-scrub ok
Nov 24 18:24:08 compute-0 ceph-mon[74927]: 2.3 scrub starts
Nov 24 18:24:08 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 24 18:24:08 compute-0 ceph-mon[74927]: 2.3 scrub ok
Nov 24 18:24:08 compute-0 ceph-mon[74927]: osdmap e108: 3 total, 3 up, 3 in
Nov 24 18:24:08 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:24:08 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:24:08 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v237: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 18:24:08 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 18:24:08 compute-0 sshd-session[106323]: Connection closed by 192.168.122.30 port 35388
Nov 24 18:24:08 compute-0 sshd-session[106320]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:24:08 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Nov 24 18:24:08 compute-0 systemd[1]: session-34.scope: Consumed 19.867s CPU time.
Nov 24 18:24:08 compute-0 systemd-logind[822]: Session 34 logged out. Waiting for processes to exit.
Nov 24 18:24:08 compute-0 systemd-logind[822]: Removed session 34.
Nov 24 18:24:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Nov 24 18:24:09 compute-0 ceph-mon[74927]: osdmap e109: 3 total, 3 up, 3 in
Nov 24 18:24:09 compute-0 ceph-mon[74927]: pgmap v237: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:09 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 18:24:09 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 18:24:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Nov 24 18:24:09 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Nov 24 18:24:09 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 110 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=109/110 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:24:10 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 18:24:10 compute-0 ceph-mon[74927]: osdmap e110: 3 total, 3 up, 3 in
Nov 24 18:24:10 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.13 deep-scrub starts
Nov 24 18:24:10 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.13 deep-scrub ok
Nov 24 18:24:10 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v239: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:10 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Nov 24 18:24:10 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 24 18:24:10 compute-0 sshd-session[110602]: Invalid user Administrator from 80.94.95.115 port 28078
Nov 24 18:24:10 compute-0 sshd-session[110602]: Connection closed by invalid user Administrator 80.94.95.115 port 28078 [preauth]
Nov 24 18:24:11 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Nov 24 18:24:11 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 24 18:24:11 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Nov 24 18:24:11 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Nov 24 18:24:11 compute-0 ceph-mon[74927]: 7.13 deep-scrub starts
Nov 24 18:24:11 compute-0 ceph-mon[74927]: 7.13 deep-scrub ok
Nov 24 18:24:11 compute-0 ceph-mon[74927]: pgmap v239: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:11 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 24 18:24:12 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 24 18:24:12 compute-0 ceph-mon[74927]: osdmap e111: 3 total, 3 up, 3 in
Nov 24 18:24:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:24:12 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v241: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 24 18:24:12 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 24 18:24:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Nov 24 18:24:13 compute-0 ceph-mon[74927]: pgmap v241: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:13 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 24 18:24:13 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 24 18:24:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Nov 24 18:24:13 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Nov 24 18:24:13 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Nov 24 18:24:13 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Nov 24 18:24:13 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 111 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111 pruub=10.604092598s) [0] r=-1 lpr=111 pi=[86,111)/1 crt=55'385 mlcod 0'0 active pruub 216.645172119s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:24:13 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 111 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111 pruub=10.603497505s) [0] r=-1 lpr=111 pi=[86,111)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 216.645172119s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:24:13 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 111 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111) [0] r=0 lpr=111 pi=[86,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:24:14 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Nov 24 18:24:14 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Nov 24 18:24:14 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Nov 24 18:24:14 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 24 18:24:14 compute-0 ceph-mon[74927]: osdmap e112: 3 total, 3 up, 3 in
Nov 24 18:24:14 compute-0 ceph-mon[74927]: 3.7 scrub starts
Nov 24 18:24:14 compute-0 ceph-mon[74927]: 3.7 scrub ok
Nov 24 18:24:14 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:24:14 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[86,113)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:24:14 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:24:14 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[86,113)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:24:14 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v244: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 1 objects/s recovering
Nov 24 18:24:14 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Nov 24 18:24:14 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 24 18:24:14 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.c deep-scrub starts
Nov 24 18:24:14 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.c deep-scrub ok
Nov 24 18:24:15 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Nov 24 18:24:15 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 24 18:24:15 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Nov 24 18:24:15 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Nov 24 18:24:15 compute-0 ceph-mon[74927]: osdmap e113: 3 total, 3 up, 3 in
Nov 24 18:24:15 compute-0 ceph-mon[74927]: pgmap v244: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 1 objects/s recovering
Nov 24 18:24:15 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 24 18:24:15 compute-0 ceph-mon[74927]: 7.c deep-scrub starts
Nov 24 18:24:15 compute-0 ceph-mon[74927]: 7.c deep-scrub ok
Nov 24 18:24:15 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114 pruub=9.023878098s) [0] r=-1 lpr=114 pi=[75,114)/1 crt=55'385 mlcod 0'0 active pruub 216.589828491s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:24:15 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114 pruub=9.023796082s) [0] r=-1 lpr=114 pi=[75,114)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 216.589828491s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:24:15 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:24:15 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:24:15 compute-0 sudo[110604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:24:15 compute-0 sudo[110604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:24:15 compute-0 sudo[110604]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:15 compute-0 sudo[110629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:24:15 compute-0 sudo[110629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:24:15 compute-0 sudo[110629]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:15 compute-0 sudo[110654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:24:15 compute-0 sudo[110654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:24:15 compute-0 sudo[110654]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:15 compute-0 sudo[110679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:24:15 compute-0 sudo[110679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:24:16 compute-0 sudo[110679]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:16 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:24:16 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:24:16 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:24:16 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:24:16 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:24:16 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Nov 24 18:24:16 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:24:16 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 21c68669-acb9-4002-b25f-6f78cb1d900c does not exist
Nov 24 18:24:16 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 89fecb54-f04f-4492-aa56-fd00f3f17094 does not exist
Nov 24 18:24:16 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 44795734-8483-4a45-b997-6a4b1e01b443 does not exist
Nov 24 18:24:16 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:24:16 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:24:16 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Nov 24 18:24:16 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 24 18:24:16 compute-0 ceph-mon[74927]: osdmap e114: 3 total, 3 up, 3 in
Nov 24 18:24:16 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:24:16 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:24:16 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115 pruub=14.998288155s) [0] async=[0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 55'385 active pruub 223.578338623s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:24:16 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:24:16 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115 pruub=14.998158455s) [0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 223.578338623s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:24:16 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:24:16 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Nov 24 18:24:16 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:24:16 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:24:16 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:24:16 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:24:16 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:24:16 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:24:16 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:24:16 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:24:16 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v247: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 24 18:24:16 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 24 18:24:16 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 18:24:16 compute-0 sudo[110735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:24:16 compute-0 sudo[110735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:24:16 compute-0 sudo[110735]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:16 compute-0 sudo[110760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:24:16 compute-0 sudo[110760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:24:16 compute-0 sudo[110760]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:16 compute-0 sudo[110785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:24:16 compute-0 sudo[110785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:24:16 compute-0 sudo[110785]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:16 compute-0 sudo[110810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:24:16 compute-0 sudo[110810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:24:17 compute-0 podman[110875]: 2025-11-24 18:24:17.020166487 +0000 UTC m=+0.038107948 container create 97ea05359d5d31fc60c9f695dcff7de9d2a07521347c5633d8eb6a0c0223fcc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_franklin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 18:24:17 compute-0 systemd[1]: Started libpod-conmon-97ea05359d5d31fc60c9f695dcff7de9d2a07521347c5633d8eb6a0c0223fcc5.scope.
Nov 24 18:24:17 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:24:17 compute-0 podman[110875]: 2025-11-24 18:24:17.003873662 +0000 UTC m=+0.021815143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:24:17 compute-0 podman[110875]: 2025-11-24 18:24:17.098417831 +0000 UTC m=+0.116359312 container init 97ea05359d5d31fc60c9f695dcff7de9d2a07521347c5633d8eb6a0c0223fcc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:24:17 compute-0 podman[110875]: 2025-11-24 18:24:17.105887927 +0000 UTC m=+0.123829388 container start 97ea05359d5d31fc60c9f695dcff7de9d2a07521347c5633d8eb6a0c0223fcc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 18:24:17 compute-0 podman[110875]: 2025-11-24 18:24:17.108772309 +0000 UTC m=+0.126713800 container attach 97ea05359d5d31fc60c9f695dcff7de9d2a07521347c5633d8eb6a0c0223fcc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_franklin, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 18:24:17 compute-0 laughing_franklin[110891]: 167 167
Nov 24 18:24:17 compute-0 systemd[1]: libpod-97ea05359d5d31fc60c9f695dcff7de9d2a07521347c5633d8eb6a0c0223fcc5.scope: Deactivated successfully.
Nov 24 18:24:17 compute-0 podman[110875]: 2025-11-24 18:24:17.110599324 +0000 UTC m=+0.128540785 container died 97ea05359d5d31fc60c9f695dcff7de9d2a07521347c5633d8eb6a0c0223fcc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 18:24:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-6712dbea73fa464bbdc9243bcc6f1568b5772b6e754f67b865eda569db959754-merged.mount: Deactivated successfully.
Nov 24 18:24:17 compute-0 podman[110875]: 2025-11-24 18:24:17.153245694 +0000 UTC m=+0.171187155 container remove 97ea05359d5d31fc60c9f695dcff7de9d2a07521347c5633d8eb6a0c0223fcc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 18:24:17 compute-0 systemd[1]: libpod-conmon-97ea05359d5d31fc60c9f695dcff7de9d2a07521347c5633d8eb6a0c0223fcc5.scope: Deactivated successfully.
Nov 24 18:24:17 compute-0 podman[110916]: 2025-11-24 18:24:17.305083887 +0000 UTC m=+0.036646562 container create 6f50c0863af132268c180520c83d59356f28ca9af5033723b56bedc9ee7db519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_lichterman, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 18:24:17 compute-0 systemd[1]: Started libpod-conmon-6f50c0863af132268c180520c83d59356f28ca9af5033723b56bedc9ee7db519.scope.
Nov 24 18:24:17 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:24:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f183c0bb769f95b475054ed7f7454f88d292cefe79850a418235ab26520d90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:24:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f183c0bb769f95b475054ed7f7454f88d292cefe79850a418235ab26520d90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:24:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f183c0bb769f95b475054ed7f7454f88d292cefe79850a418235ab26520d90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:24:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f183c0bb769f95b475054ed7f7454f88d292cefe79850a418235ab26520d90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:24:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f183c0bb769f95b475054ed7f7454f88d292cefe79850a418235ab26520d90/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:24:17 compute-0 podman[110916]: 2025-11-24 18:24:17.287875639 +0000 UTC m=+0.019438334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:24:17 compute-0 podman[110916]: 2025-11-24 18:24:17.387946916 +0000 UTC m=+0.119509621 container init 6f50c0863af132268c180520c83d59356f28ca9af5033723b56bedc9ee7db519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:24:17 compute-0 podman[110916]: 2025-11-24 18:24:17.395148215 +0000 UTC m=+0.126710890 container start 6f50c0863af132268c180520c83d59356f28ca9af5033723b56bedc9ee7db519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_lichterman, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:24:17 compute-0 podman[110916]: 2025-11-24 18:24:17.398252742 +0000 UTC m=+0.129815437 container attach 6f50c0863af132268c180520c83d59356f28ca9af5033723b56bedc9ee7db519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:24:17 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.a scrub starts
Nov 24 18:24:17 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.a scrub ok
Nov 24 18:24:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Nov 24 18:24:17 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 18:24:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Nov 24 18:24:17 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Nov 24 18:24:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:24:17 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:24:17 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:24:17 compute-0 ceph-mon[74927]: osdmap e115: 3 total, 3 up, 3 in
Nov 24 18:24:17 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:24:17 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:24:17 compute-0 ceph-mon[74927]: pgmap v247: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 24 18:24:17 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 18:24:17 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 116 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:24:17 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116 pruub=8.010222435s) [1] r=-1 lpr=116 pi=[76,116)/1 crt=55'385 mlcod 0'0 active pruub 217.601470947s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:24:17 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116 pruub=8.010166168s) [1] r=-1 lpr=116 pi=[76,116)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 217.601470947s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:24:17 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:24:17 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:24:17 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 24 18:24:17 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 24 18:24:18 compute-0 loving_lichterman[110933]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:24:18 compute-0 loving_lichterman[110933]: --> relative data size: 1.0
Nov 24 18:24:18 compute-0 loving_lichterman[110933]: --> All data devices are unavailable
Nov 24 18:24:18 compute-0 systemd[1]: libpod-6f50c0863af132268c180520c83d59356f28ca9af5033723b56bedc9ee7db519.scope: Deactivated successfully.
Nov 24 18:24:18 compute-0 podman[110916]: 2025-11-24 18:24:18.363890817 +0000 UTC m=+1.095453492 container died 6f50c0863af132268c180520c83d59356f28ca9af5033723b56bedc9ee7db519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_lichterman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:24:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1f183c0bb769f95b475054ed7f7454f88d292cefe79850a418235ab26520d90-merged.mount: Deactivated successfully.
Nov 24 18:24:18 compute-0 podman[110916]: 2025-11-24 18:24:18.415298735 +0000 UTC m=+1.146861410 container remove 6f50c0863af132268c180520c83d59356f28ca9af5033723b56bedc9ee7db519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:24:18 compute-0 systemd[1]: libpod-conmon-6f50c0863af132268c180520c83d59356f28ca9af5033723b56bedc9ee7db519.scope: Deactivated successfully.
Nov 24 18:24:18 compute-0 sudo[110810]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:18 compute-0 sudo[110974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:24:18 compute-0 sudo[110974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:24:18 compute-0 sudo[110974]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:18 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Nov 24 18:24:18 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Nov 24 18:24:18 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Nov 24 18:24:18 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:24:18 compute-0 ceph-mon[74927]: 3.a scrub starts
Nov 24 18:24:18 compute-0 ceph-mon[74927]: 3.a scrub ok
Nov 24 18:24:18 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 18:24:18 compute-0 ceph-mon[74927]: osdmap e116: 3 total, 3 up, 3 in
Nov 24 18:24:18 compute-0 ceph-mon[74927]: 7.e scrub starts
Nov 24 18:24:18 compute-0 ceph-mon[74927]: 7.e scrub ok
Nov 24 18:24:18 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:24:18 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117 pruub=14.991624832s) [0] async=[0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 55'385 active pruub 225.596588135s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:24:18 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117 pruub=14.991396904s) [0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 225.596588135s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:24:18 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:24:18 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:24:18 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v250: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Nov 24 18:24:18 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:24:18 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:24:18 compute-0 sudo[110999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:24:18 compute-0 sudo[110999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:24:18 compute-0 sudo[110999]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:18 compute-0 sudo[111024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:24:18 compute-0 sudo[111024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:24:18 compute-0 sudo[111024]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:18 compute-0 sudo[111049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:24:18 compute-0 sudo[111049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:24:19 compute-0 podman[111114]: 2025-11-24 18:24:19.01152592 +0000 UTC m=+0.083128656 container create ca2eec5fb90d0e283b3d1a68cdb1dbcfb5a1cde838bcf4694b4b7102e7d89188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_thompson, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 18:24:19 compute-0 podman[111114]: 2025-11-24 18:24:18.951660823 +0000 UTC m=+0.023263579 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:24:19 compute-0 systemd[1]: Started libpod-conmon-ca2eec5fb90d0e283b3d1a68cdb1dbcfb5a1cde838bcf4694b4b7102e7d89188.scope.
Nov 24 18:24:19 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:24:19 compute-0 podman[111114]: 2025-11-24 18:24:19.094633715 +0000 UTC m=+0.166236481 container init ca2eec5fb90d0e283b3d1a68cdb1dbcfb5a1cde838bcf4694b4b7102e7d89188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_thompson, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 18:24:19 compute-0 podman[111114]: 2025-11-24 18:24:19.101941237 +0000 UTC m=+0.173543973 container start ca2eec5fb90d0e283b3d1a68cdb1dbcfb5a1cde838bcf4694b4b7102e7d89188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_thompson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 18:24:19 compute-0 podman[111114]: 2025-11-24 18:24:19.104989323 +0000 UTC m=+0.176592089 container attach ca2eec5fb90d0e283b3d1a68cdb1dbcfb5a1cde838bcf4694b4b7102e7d89188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 18:24:19 compute-0 lucid_thompson[111130]: 167 167
Nov 24 18:24:19 compute-0 systemd[1]: libpod-ca2eec5fb90d0e283b3d1a68cdb1dbcfb5a1cde838bcf4694b4b7102e7d89188.scope: Deactivated successfully.
Nov 24 18:24:19 compute-0 podman[111114]: 2025-11-24 18:24:19.107784922 +0000 UTC m=+0.179387708 container died ca2eec5fb90d0e283b3d1a68cdb1dbcfb5a1cde838bcf4694b4b7102e7d89188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_thompson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Nov 24 18:24:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5887d4f16e36538c64fc7d072ff42d42888efd586ec7404e3884c2eb1a70970-merged.mount: Deactivated successfully.
Nov 24 18:24:19 compute-0 podman[111114]: 2025-11-24 18:24:19.146186436 +0000 UTC m=+0.217789172 container remove ca2eec5fb90d0e283b3d1a68cdb1dbcfb5a1cde838bcf4694b4b7102e7d89188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:24:19 compute-0 systemd[1]: libpod-conmon-ca2eec5fb90d0e283b3d1a68cdb1dbcfb5a1cde838bcf4694b4b7102e7d89188.scope: Deactivated successfully.
Nov 24 18:24:19 compute-0 podman[111154]: 2025-11-24 18:24:19.291589419 +0000 UTC m=+0.045784338 container create 30eef09e1aaf461606eb4a91c2fcd939c2c820be634b15f0c643d9d1ee7c82e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wozniak, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 18:24:19 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.1b deep-scrub starts
Nov 24 18:24:19 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.1b deep-scrub ok
Nov 24 18:24:19 compute-0 systemd[1]: Started libpod-conmon-30eef09e1aaf461606eb4a91c2fcd939c2c820be634b15f0c643d9d1ee7c82e3.scope.
Nov 24 18:24:19 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:24:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60bb876d567beb8bc9a3acd25da2580caffb2404522d168ccca5fef2ecd9863a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:24:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60bb876d567beb8bc9a3acd25da2580caffb2404522d168ccca5fef2ecd9863a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:24:19 compute-0 podman[111154]: 2025-11-24 18:24:19.272208038 +0000 UTC m=+0.026402967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:24:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60bb876d567beb8bc9a3acd25da2580caffb2404522d168ccca5fef2ecd9863a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:24:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60bb876d567beb8bc9a3acd25da2580caffb2404522d168ccca5fef2ecd9863a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:24:19 compute-0 podman[111154]: 2025-11-24 18:24:19.383474153 +0000 UTC m=+0.137669092 container init 30eef09e1aaf461606eb4a91c2fcd939c2c820be634b15f0c643d9d1ee7c82e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 18:24:19 compute-0 podman[111154]: 2025-11-24 18:24:19.391097922 +0000 UTC m=+0.145292831 container start 30eef09e1aaf461606eb4a91c2fcd939c2c820be634b15f0c643d9d1ee7c82e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 24 18:24:19 compute-0 podman[111154]: 2025-11-24 18:24:19.393957743 +0000 UTC m=+0.148152662 container attach 30eef09e1aaf461606eb4a91c2fcd939c2c820be634b15f0c643d9d1ee7c82e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wozniak, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:24:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Nov 24 18:24:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Nov 24 18:24:19 compute-0 ceph-mon[74927]: osdmap e117: 3 total, 3 up, 3 in
Nov 24 18:24:19 compute-0 ceph-mon[74927]: pgmap v250: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Nov 24 18:24:19 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Nov 24 18:24:19 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 118 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:24:20 compute-0 elated_wozniak[111170]: {
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:     "0": [
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:         {
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "devices": [
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "/dev/loop3"
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             ],
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "lv_name": "ceph_lv0",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "lv_size": "21470642176",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "name": "ceph_lv0",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "tags": {
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.cluster_name": "ceph",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.crush_device_class": "",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.encrypted": "0",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.osd_id": "0",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.type": "block",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.vdo": "0"
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             },
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "type": "block",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "vg_name": "ceph_vg0"
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:         }
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:     ],
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:     "1": [
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:         {
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "devices": [
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "/dev/loop4"
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             ],
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "lv_name": "ceph_lv1",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "lv_size": "21470642176",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "name": "ceph_lv1",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "tags": {
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.cluster_name": "ceph",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.crush_device_class": "",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.encrypted": "0",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.osd_id": "1",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.type": "block",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.vdo": "0"
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             },
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "type": "block",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "vg_name": "ceph_vg1"
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:         }
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:     ],
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:     "2": [
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:         {
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "devices": [
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "/dev/loop5"
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             ],
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "lv_name": "ceph_lv2",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "lv_size": "21470642176",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "name": "ceph_lv2",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "tags": {
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.cluster_name": "ceph",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.crush_device_class": "",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.encrypted": "0",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.osd_id": "2",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.type": "block",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:                 "ceph.vdo": "0"
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             },
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "type": "block",
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:             "vg_name": "ceph_vg2"
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:         }
Nov 24 18:24:20 compute-0 elated_wozniak[111170]:     ]
Nov 24 18:24:20 compute-0 elated_wozniak[111170]: }
Nov 24 18:24:20 compute-0 systemd[1]: libpod-30eef09e1aaf461606eb4a91c2fcd939c2c820be634b15f0c643d9d1ee7c82e3.scope: Deactivated successfully.
Nov 24 18:24:20 compute-0 podman[111154]: 2025-11-24 18:24:20.16692746 +0000 UTC m=+0.921122369 container died 30eef09e1aaf461606eb4a91c2fcd939c2c820be634b15f0c643d9d1ee7c82e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wozniak, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 18:24:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-60bb876d567beb8bc9a3acd25da2580caffb2404522d168ccca5fef2ecd9863a-merged.mount: Deactivated successfully.
Nov 24 18:24:20 compute-0 podman[111154]: 2025-11-24 18:24:20.219729542 +0000 UTC m=+0.973924461 container remove 30eef09e1aaf461606eb4a91c2fcd939c2c820be634b15f0c643d9d1ee7c82e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:24:20 compute-0 systemd[1]: libpod-conmon-30eef09e1aaf461606eb4a91c2fcd939c2c820be634b15f0c643d9d1ee7c82e3.scope: Deactivated successfully.
Nov 24 18:24:20 compute-0 sudo[111049]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:20 compute-0 sudo[111190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:24:20 compute-0 sudo[111190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:24:20 compute-0 sudo[111190]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:20 compute-0 sudo[111215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:24:20 compute-0 sudo[111215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:24:20 compute-0 sudo[111215]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:20 compute-0 sudo[111240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:24:20 compute-0 sudo[111240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:24:20 compute-0 sudo[111240]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:20 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:24:20 compute-0 sudo[111265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:24:20 compute-0 sudo[111265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:24:20 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v252: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Nov 24 18:24:20 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Nov 24 18:24:20 compute-0 ceph-mon[74927]: 3.1b deep-scrub starts
Nov 24 18:24:20 compute-0 ceph-mon[74927]: 3.1b deep-scrub ok
Nov 24 18:24:20 compute-0 ceph-mon[74927]: osdmap e118: 3 total, 3 up, 3 in
Nov 24 18:24:20 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Nov 24 18:24:20 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Nov 24 18:24:20 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119 pruub=15.899600029s) [1] async=[1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 55'385 active pruub 228.537811279s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:24:20 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119 pruub=15.899522781s) [1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 228.537811279s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:24:20 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:24:20 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:24:20 compute-0 podman[111328]: 2025-11-24 18:24:20.840409185 +0000 UTC m=+0.056727321 container create 9778e63ef33f23b43df4d52d58196c45d78484f52ec30f3ef03d8ac180b07a5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chatelet, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 18:24:20 compute-0 systemd[1]: Started libpod-conmon-9778e63ef33f23b43df4d52d58196c45d78484f52ec30f3ef03d8ac180b07a5a.scope.
Nov 24 18:24:20 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:24:20 compute-0 podman[111328]: 2025-11-24 18:24:20.819437774 +0000 UTC m=+0.035756000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:24:20 compute-0 podman[111328]: 2025-11-24 18:24:20.92192742 +0000 UTC m=+0.138245576 container init 9778e63ef33f23b43df4d52d58196c45d78484f52ec30f3ef03d8ac180b07a5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:24:20 compute-0 podman[111328]: 2025-11-24 18:24:20.928267058 +0000 UTC m=+0.144585194 container start 9778e63ef33f23b43df4d52d58196c45d78484f52ec30f3ef03d8ac180b07a5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:24:20 compute-0 podman[111328]: 2025-11-24 18:24:20.931458167 +0000 UTC m=+0.147776303 container attach 9778e63ef33f23b43df4d52d58196c45d78484f52ec30f3ef03d8ac180b07a5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 18:24:20 compute-0 brave_chatelet[111344]: 167 167
Nov 24 18:24:20 compute-0 systemd[1]: libpod-9778e63ef33f23b43df4d52d58196c45d78484f52ec30f3ef03d8ac180b07a5a.scope: Deactivated successfully.
Nov 24 18:24:20 compute-0 podman[111328]: 2025-11-24 18:24:20.933704643 +0000 UTC m=+0.150022779 container died 9778e63ef33f23b43df4d52d58196c45d78484f52ec30f3ef03d8ac180b07a5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chatelet, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:24:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-c51dabc286fae2b8da0d9c850f82fe72312d48b97db85a898b492dd87e347bdb-merged.mount: Deactivated successfully.
Nov 24 18:24:20 compute-0 podman[111328]: 2025-11-24 18:24:20.977298036 +0000 UTC m=+0.193616172 container remove 9778e63ef33f23b43df4d52d58196c45d78484f52ec30f3ef03d8ac180b07a5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chatelet, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 18:24:20 compute-0 systemd[1]: libpod-conmon-9778e63ef33f23b43df4d52d58196c45d78484f52ec30f3ef03d8ac180b07a5a.scope: Deactivated successfully.
Nov 24 18:24:21 compute-0 podman[111369]: 2025-11-24 18:24:21.120013503 +0000 UTC m=+0.036474388 container create ecadfb7af729e145628ab0ff36d8ff8283946de8ae0a38c553311c334b4e57f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_khorana, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 18:24:21 compute-0 systemd[1]: Started libpod-conmon-ecadfb7af729e145628ab0ff36d8ff8283946de8ae0a38c553311c334b4e57f5.scope.
Nov 24 18:24:21 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:24:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34b1457558d6c5232fa526cac33a432744b172ec46785763e261b40840e79d9c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:24:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34b1457558d6c5232fa526cac33a432744b172ec46785763e261b40840e79d9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:24:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34b1457558d6c5232fa526cac33a432744b172ec46785763e261b40840e79d9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:24:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34b1457558d6c5232fa526cac33a432744b172ec46785763e261b40840e79d9c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:24:21 compute-0 podman[111369]: 2025-11-24 18:24:21.178749372 +0000 UTC m=+0.095210287 container init ecadfb7af729e145628ab0ff36d8ff8283946de8ae0a38c553311c334b4e57f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_khorana, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:24:21 compute-0 podman[111369]: 2025-11-24 18:24:21.186248688 +0000 UTC m=+0.102709583 container start ecadfb7af729e145628ab0ff36d8ff8283946de8ae0a38c553311c334b4e57f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 18:24:21 compute-0 podman[111369]: 2025-11-24 18:24:21.190160806 +0000 UTC m=+0.106621721 container attach ecadfb7af729e145628ab0ff36d8ff8283946de8ae0a38c553311c334b4e57f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_khorana, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:24:21 compute-0 podman[111369]: 2025-11-24 18:24:21.102232891 +0000 UTC m=+0.018693806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:24:21 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Nov 24 18:24:21 compute-0 ceph-mon[74927]: pgmap v252: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Nov 24 18:24:21 compute-0 ceph-mon[74927]: osdmap e119: 3 total, 3 up, 3 in
Nov 24 18:24:21 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Nov 24 18:24:21 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Nov 24 18:24:21 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=119/120 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:24:22 compute-0 keen_khorana[111385]: {
Nov 24 18:24:22 compute-0 keen_khorana[111385]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:24:22 compute-0 keen_khorana[111385]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:24:22 compute-0 keen_khorana[111385]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:24:22 compute-0 keen_khorana[111385]:         "osd_id": 0,
Nov 24 18:24:22 compute-0 keen_khorana[111385]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:24:22 compute-0 keen_khorana[111385]:         "type": "bluestore"
Nov 24 18:24:22 compute-0 keen_khorana[111385]:     },
Nov 24 18:24:22 compute-0 keen_khorana[111385]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:24:22 compute-0 keen_khorana[111385]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:24:22 compute-0 keen_khorana[111385]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:24:22 compute-0 keen_khorana[111385]:         "osd_id": 1,
Nov 24 18:24:22 compute-0 keen_khorana[111385]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:24:22 compute-0 keen_khorana[111385]:         "type": "bluestore"
Nov 24 18:24:22 compute-0 keen_khorana[111385]:     },
Nov 24 18:24:22 compute-0 keen_khorana[111385]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:24:22 compute-0 keen_khorana[111385]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:24:22 compute-0 keen_khorana[111385]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:24:22 compute-0 keen_khorana[111385]:         "osd_id": 2,
Nov 24 18:24:22 compute-0 keen_khorana[111385]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:24:22 compute-0 keen_khorana[111385]:         "type": "bluestore"
Nov 24 18:24:22 compute-0 keen_khorana[111385]:     }
Nov 24 18:24:22 compute-0 keen_khorana[111385]: }
Nov 24 18:24:22 compute-0 systemd[1]: libpod-ecadfb7af729e145628ab0ff36d8ff8283946de8ae0a38c553311c334b4e57f5.scope: Deactivated successfully.
Nov 24 18:24:22 compute-0 podman[111418]: 2025-11-24 18:24:22.163728188 +0000 UTC m=+0.019307041 container died ecadfb7af729e145628ab0ff36d8ff8283946de8ae0a38c553311c334b4e57f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 18:24:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-34b1457558d6c5232fa526cac33a432744b172ec46785763e261b40840e79d9c-merged.mount: Deactivated successfully.
Nov 24 18:24:22 compute-0 podman[111418]: 2025-11-24 18:24:22.219725979 +0000 UTC m=+0.075304842 container remove ecadfb7af729e145628ab0ff36d8ff8283946de8ae0a38c553311c334b4e57f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:24:22 compute-0 systemd[1]: libpod-conmon-ecadfb7af729e145628ab0ff36d8ff8283946de8ae0a38c553311c334b4e57f5.scope: Deactivated successfully.
Nov 24 18:24:22 compute-0 sudo[111265]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:24:22 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:24:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:24:22 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:24:22 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 319a3bcf-1804-4c9c-abdd-eced4dd8ff49 does not exist
Nov 24 18:24:22 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev bfc609cd-fe3b-4acf-a683-3035893c7017 does not exist
Nov 24 18:24:22 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.9 deep-scrub starts
Nov 24 18:24:22 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.9 deep-scrub ok
Nov 24 18:24:22 compute-0 sudo[111433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:24:22 compute-0 sudo[111433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:24:22 compute-0 sudo[111433]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:22 compute-0 sudo[111458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:24:22 compute-0 sudo[111458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:24:22 compute-0 sudo[111458]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:24:22 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Nov 24 18:24:22 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Nov 24 18:24:22 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v255: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 24 18:24:22 compute-0 ceph-mon[74927]: osdmap e120: 3 total, 3 up, 3 in
Nov 24 18:24:22 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:24:22 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:24:23 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 24 18:24:23 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 24 18:24:23 compute-0 ceph-mon[74927]: 7.9 deep-scrub starts
Nov 24 18:24:23 compute-0 ceph-mon[74927]: 7.9 deep-scrub ok
Nov 24 18:24:23 compute-0 ceph-mon[74927]: 2.5 scrub starts
Nov 24 18:24:23 compute-0 ceph-mon[74927]: 2.5 scrub ok
Nov 24 18:24:23 compute-0 ceph-mon[74927]: pgmap v255: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 24 18:24:24 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 24 18:24:24 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 24 18:24:24 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v256: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Nov 24 18:24:24 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.a scrub starts
Nov 24 18:24:24 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.a scrub ok
Nov 24 18:24:24 compute-0 ceph-mon[74927]: 7.1b scrub starts
Nov 24 18:24:24 compute-0 ceph-mon[74927]: 7.1b scrub ok
Nov 24 18:24:25 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Nov 24 18:24:25 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Nov 24 18:24:25 compute-0 ceph-mon[74927]: 7.18 scrub starts
Nov 24 18:24:25 compute-0 ceph-mon[74927]: 7.18 scrub ok
Nov 24 18:24:25 compute-0 ceph-mon[74927]: pgmap v256: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Nov 24 18:24:25 compute-0 ceph-mon[74927]: 2.a scrub starts
Nov 24 18:24:25 compute-0 ceph-mon[74927]: 2.a scrub ok
Nov 24 18:24:26 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Nov 24 18:24:26 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Nov 24 18:24:26 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v257: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Nov 24 18:24:26 compute-0 ceph-mon[74927]: 3.1d scrub starts
Nov 24 18:24:26 compute-0 ceph-mon[74927]: 3.1d scrub ok
Nov 24 18:24:27 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 24 18:24:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:24:27 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 24 18:24:27 compute-0 ceph-mon[74927]: 7.1f scrub starts
Nov 24 18:24:27 compute-0 ceph-mon[74927]: 7.1f scrub ok
Nov 24 18:24:27 compute-0 ceph-mon[74927]: pgmap v257: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Nov 24 18:24:28 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v258: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Nov 24 18:24:28 compute-0 ceph-mon[74927]: 2.d scrub starts
Nov 24 18:24:28 compute-0 ceph-mon[74927]: 2.d scrub ok
Nov 24 18:24:29 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Nov 24 18:24:29 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Nov 24 18:24:29 compute-0 ceph-mon[74927]: pgmap v258: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Nov 24 18:24:30 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v259: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 11 B/s, 0 objects/s recovering
Nov 24 18:24:30 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 24 18:24:30 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 24 18:24:30 compute-0 ceph-mon[74927]: 3.1f scrub starts
Nov 24 18:24:30 compute-0 ceph-mon[74927]: 3.1f scrub ok
Nov 24 18:24:31 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Nov 24 18:24:31 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Nov 24 18:24:31 compute-0 sshd-session[111483]: Accepted publickey for zuul from 192.168.122.30 port 45034 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:24:31 compute-0 systemd-logind[822]: New session 35 of user zuul.
Nov 24 18:24:31 compute-0 systemd[1]: Started Session 35 of User zuul.
Nov 24 18:24:31 compute-0 sshd-session[111483]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:24:31 compute-0 ceph-mon[74927]: pgmap v259: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 11 B/s, 0 objects/s recovering
Nov 24 18:24:31 compute-0 ceph-mon[74927]: 7.1a scrub starts
Nov 24 18:24:31 compute-0 ceph-mon[74927]: 7.1a scrub ok
Nov 24 18:24:32 compute-0 python3.9[111636]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 24 18:24:32 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Nov 24 18:24:32 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Nov 24 18:24:32 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 24 18:24:32 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 24 18:24:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:24:32 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v260: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Nov 24 18:24:32 compute-0 ceph-mon[74927]: 3.9 scrub starts
Nov 24 18:24:32 compute-0 ceph-mon[74927]: 3.9 scrub ok
Nov 24 18:24:32 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 24 18:24:32 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 24 18:24:33 compute-0 python3.9[111810]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:24:33 compute-0 ceph-mon[74927]: 3.15 scrub starts
Nov 24 18:24:33 compute-0 ceph-mon[74927]: 3.15 scrub ok
Nov 24 18:24:33 compute-0 ceph-mon[74927]: 5.9 scrub starts
Nov 24 18:24:33 compute-0 ceph-mon[74927]: 5.9 scrub ok
Nov 24 18:24:33 compute-0 ceph-mon[74927]: pgmap v260: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Nov 24 18:24:33 compute-0 ceph-mon[74927]: 3.1e scrub starts
Nov 24 18:24:33 compute-0 ceph-mon[74927]: 3.1e scrub ok
Nov 24 18:24:34 compute-0 sudo[111964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpbjusfyidebywqlefneuzqtcbnrvqyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008673.7858064-45-210537515663927/AnsiballZ_command.py'
Nov 24 18:24:34 compute-0 sudo[111964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:24:34 compute-0 python3.9[111966]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:24:34 compute-0 sudo[111964]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:24:34
Nov 24 18:24:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:24:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:24:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['default.rgw.log', 'images', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control']
Nov 24 18:24:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:24:34 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v261: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Nov 24 18:24:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:24:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:24:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:24:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:24:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:24:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:24:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:24:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:24:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:24:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:24:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:24:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:24:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:24:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:24:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:24:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:24:34 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Nov 24 18:24:34 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Nov 24 18:24:34 compute-0 rsyslogd[1008]: imjournal from <np0005533938:ceph-mgr>: begin to drop messages due to rate-limiting
Nov 24 18:24:35 compute-0 sudo[112117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eftktanunpmnzgbuirncdvatdcqjcrsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008674.687958-57-236589425076604/AnsiballZ_stat.py'
Nov 24 18:24:35 compute-0 sudo[112117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:24:35 compute-0 python3.9[112119]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:24:35 compute-0 sudo[112117]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:35 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 24 18:24:35 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 24 18:24:35 compute-0 ceph-mon[74927]: pgmap v261: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Nov 24 18:24:35 compute-0 ceph-mon[74927]: 10.3 scrub starts
Nov 24 18:24:35 compute-0 ceph-mon[74927]: 10.3 scrub ok
Nov 24 18:24:35 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.5 deep-scrub starts
Nov 24 18:24:35 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.5 deep-scrub ok
Nov 24 18:24:35 compute-0 sudo[112271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zffwhiijxxwjmzbmxznnewgictnnpsup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008675.4495628-68-277247136837548/AnsiballZ_file.py'
Nov 24 18:24:35 compute-0 sudo[112271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:24:36 compute-0 python3.9[112273]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:24:36 compute-0 sudo[112271]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:36 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v262: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:36 compute-0 sudo[112423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeaxzxelnatytxcqlawihoaoycchayzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008676.2315617-77-117641701395852/AnsiballZ_file.py'
Nov 24 18:24:36 compute-0 sudo[112423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:24:36 compute-0 ceph-mon[74927]: 3.12 scrub starts
Nov 24 18:24:36 compute-0 ceph-mon[74927]: 3.12 scrub ok
Nov 24 18:24:36 compute-0 ceph-mon[74927]: 10.5 deep-scrub starts
Nov 24 18:24:36 compute-0 ceph-mon[74927]: 10.5 deep-scrub ok
Nov 24 18:24:36 compute-0 python3.9[112425]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:24:36 compute-0 sudo[112423]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:37 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Nov 24 18:24:37 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Nov 24 18:24:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:24:37 compute-0 python3.9[112575]: ansible-ansible.builtin.service_facts Invoked
Nov 24 18:24:37 compute-0 ceph-mon[74927]: pgmap v262: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:37 compute-0 network[112592]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 18:24:37 compute-0 network[112593]: 'network-scripts' will be removed from distribution in near future.
Nov 24 18:24:37 compute-0 network[112594]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 18:24:37 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.a scrub starts
Nov 24 18:24:37 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.a scrub ok
Nov 24 18:24:38 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.15 deep-scrub starts
Nov 24 18:24:38 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.15 deep-scrub ok
Nov 24 18:24:38 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v263: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:38 compute-0 ceph-mon[74927]: 11.10 scrub starts
Nov 24 18:24:38 compute-0 ceph-mon[74927]: 11.10 scrub ok
Nov 24 18:24:38 compute-0 ceph-mon[74927]: 10.a scrub starts
Nov 24 18:24:38 compute-0 ceph-mon[74927]: 10.a scrub ok
Nov 24 18:24:39 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Nov 24 18:24:39 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Nov 24 18:24:39 compute-0 ceph-mon[74927]: 2.15 deep-scrub starts
Nov 24 18:24:39 compute-0 ceph-mon[74927]: 2.15 deep-scrub ok
Nov 24 18:24:39 compute-0 ceph-mon[74927]: pgmap v263: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:39 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.c scrub starts
Nov 24 18:24:39 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.c scrub ok
Nov 24 18:24:40 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v264: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:40 compute-0 ceph-mon[74927]: 8.10 scrub starts
Nov 24 18:24:40 compute-0 ceph-mon[74927]: 8.10 scrub ok
Nov 24 18:24:40 compute-0 ceph-mon[74927]: 10.c scrub starts
Nov 24 18:24:40 compute-0 ceph-mon[74927]: 10.c scrub ok
Nov 24 18:24:41 compute-0 python3.9[112854]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:24:41 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Nov 24 18:24:41 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Nov 24 18:24:41 compute-0 ceph-mon[74927]: pgmap v264: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:41 compute-0 systemd[76548]: Created slice User Background Tasks Slice.
Nov 24 18:24:41 compute-0 python3.9[113004]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:24:41 compute-0 systemd[76548]: Starting Cleanup of User's Temporary Files and Directories...
Nov 24 18:24:41 compute-0 systemd[76548]: Finished Cleanup of User's Temporary Files and Directories.
Nov 24 18:24:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:24:42 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v265: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:42 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 24 18:24:42 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 24 18:24:42 compute-0 ceph-mon[74927]: 10.18 scrub starts
Nov 24 18:24:42 compute-0 ceph-mon[74927]: 10.18 scrub ok
Nov 24 18:24:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:24:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:24:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:24:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:24:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:24:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:24:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:24:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:24:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:24:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:24:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:24:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:24:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:24:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:24:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:24:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:24:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:24:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:24:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:24:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:24:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:24:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:24:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:24:43 compute-0 python3.9[113159]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:24:43 compute-0 sudo[113315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dapncsrwrhbkithrxyqyknithhxtvdaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008683.3852499-125-48313373023020/AnsiballZ_setup.py'
Nov 24 18:24:43 compute-0 sudo[113315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:24:43 compute-0 ceph-mon[74927]: pgmap v265: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:43 compute-0 ceph-mon[74927]: 10.1b scrub starts
Nov 24 18:24:43 compute-0 ceph-mon[74927]: 10.1b scrub ok
Nov 24 18:24:43 compute-0 python3.9[113317]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 18:24:44 compute-0 sudo[113315]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:44 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.b scrub starts
Nov 24 18:24:44 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.b scrub ok
Nov 24 18:24:44 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Nov 24 18:24:44 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Nov 24 18:24:44 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v266: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:44 compute-0 sudo[113399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbtbtrlxssfuyhcpihqeppxqdtitpcvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008683.3852499-125-48313373023020/AnsiballZ_dnf.py'
Nov 24 18:24:44 compute-0 sudo[113399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:24:44 compute-0 python3.9[113401]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:24:45 compute-0 ceph-mon[74927]: 8.b scrub starts
Nov 24 18:24:45 compute-0 ceph-mon[74927]: 8.b scrub ok
Nov 24 18:24:45 compute-0 ceph-mon[74927]: 5.13 scrub starts
Nov 24 18:24:45 compute-0 ceph-mon[74927]: 5.13 scrub ok
Nov 24 18:24:45 compute-0 ceph-mon[74927]: pgmap v266: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:46 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Nov 24 18:24:46 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Nov 24 18:24:46 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v267: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:46 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Nov 24 18:24:46 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Nov 24 18:24:47 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.17 deep-scrub starts
Nov 24 18:24:47 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.17 deep-scrub ok
Nov 24 18:24:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:24:47 compute-0 ceph-mon[74927]: 11.4 scrub starts
Nov 24 18:24:47 compute-0 ceph-mon[74927]: 11.4 scrub ok
Nov 24 18:24:47 compute-0 ceph-mon[74927]: pgmap v267: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:47 compute-0 ceph-mon[74927]: 10.1c scrub starts
Nov 24 18:24:47 compute-0 ceph-mon[74927]: 10.1c scrub ok
Nov 24 18:24:47 compute-0 ceph-mon[74927]: 2.17 deep-scrub starts
Nov 24 18:24:48 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v268: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:48 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Nov 24 18:24:48 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Nov 24 18:24:48 compute-0 ceph-mon[74927]: 2.17 deep-scrub ok
Nov 24 18:24:49 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Nov 24 18:24:49 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Nov 24 18:24:49 compute-0 ceph-mon[74927]: pgmap v268: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:49 compute-0 ceph-mon[74927]: 10.1d scrub starts
Nov 24 18:24:49 compute-0 ceph-mon[74927]: 10.1d scrub ok
Nov 24 18:24:50 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Nov 24 18:24:50 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Nov 24 18:24:50 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v269: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:50 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Nov 24 18:24:50 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Nov 24 18:24:50 compute-0 ceph-mon[74927]: 8.6 scrub starts
Nov 24 18:24:50 compute-0 ceph-mon[74927]: 8.6 scrub ok
Nov 24 18:24:51 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Nov 24 18:24:51 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Nov 24 18:24:51 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 24 18:24:51 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 24 18:24:51 compute-0 ceph-mon[74927]: 11.14 scrub starts
Nov 24 18:24:51 compute-0 ceph-mon[74927]: 11.14 scrub ok
Nov 24 18:24:51 compute-0 ceph-mon[74927]: pgmap v269: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:51 compute-0 ceph-mon[74927]: 10.1f scrub starts
Nov 24 18:24:51 compute-0 ceph-mon[74927]: 10.1f scrub ok
Nov 24 18:24:52 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Nov 24 18:24:52 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Nov 24 18:24:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:24:52 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v270: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:52 compute-0 ceph-mon[74927]: 8.9 scrub starts
Nov 24 18:24:52 compute-0 ceph-mon[74927]: 8.9 scrub ok
Nov 24 18:24:52 compute-0 ceph-mon[74927]: 5.11 scrub starts
Nov 24 18:24:52 compute-0 ceph-mon[74927]: 5.11 scrub ok
Nov 24 18:24:52 compute-0 sudo[113399]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:53 compute-0 sudo[113596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfvjuzngjxcletrzapehxhuvtiseocyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008693.0484157-137-34206606293176/AnsiballZ_command.py'
Nov 24 18:24:53 compute-0 sudo[113596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:24:53 compute-0 python3.9[113598]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:24:53 compute-0 ceph-mon[74927]: 11.6 scrub starts
Nov 24 18:24:53 compute-0 ceph-mon[74927]: 11.6 scrub ok
Nov 24 18:24:53 compute-0 ceph-mon[74927]: pgmap v270: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:54 compute-0 sudo[113596]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 24 18:24:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Nov 24 18:24:54 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v271: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:54 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Nov 24 18:24:54 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Nov 24 18:24:55 compute-0 sudo[113884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psdomjkkdzgcelbtgqnmzhqhsijicbsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008694.37266-145-67669512950611/AnsiballZ_selinux.py'
Nov 24 18:24:55 compute-0 sudo[113884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:24:55 compute-0 python3.9[113886]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 24 18:24:55 compute-0 sudo[113884]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:55 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Nov 24 18:24:55 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Nov 24 18:24:55 compute-0 ceph-mon[74927]: 2.1b scrub starts
Nov 24 18:24:55 compute-0 ceph-mon[74927]: 2.1b scrub ok
Nov 24 18:24:55 compute-0 ceph-mon[74927]: pgmap v271: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:55 compute-0 ceph-mon[74927]: 8.15 scrub starts
Nov 24 18:24:55 compute-0 ceph-mon[74927]: 8.15 scrub ok
Nov 24 18:24:55 compute-0 sudo[114036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nblnyedfgdfxzzvfxoaorgkbytfuqflz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008695.6797428-156-1311343305683/AnsiballZ_command.py'
Nov 24 18:24:55 compute-0 sudo[114036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:24:56 compute-0 python3.9[114038]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 24 18:24:56 compute-0 sudo[114036]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:56 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v272: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:56 compute-0 sudo[114188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgvwlqhlhkvjefslmjnxcxgpouyfwrqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008696.4280887-164-8101570326788/AnsiballZ_file.py'
Nov 24 18:24:56 compute-0 sudo[114188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:24:56 compute-0 ceph-mon[74927]: 11.15 scrub starts
Nov 24 18:24:56 compute-0 ceph-mon[74927]: 11.15 scrub ok
Nov 24 18:24:56 compute-0 python3.9[114190]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:24:56 compute-0 sudo[114188]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:57 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.f scrub starts
Nov 24 18:24:57 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.f scrub ok
Nov 24 18:24:57 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Nov 24 18:24:57 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Nov 24 18:24:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:24:57 compute-0 sudo[114340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuxchfhutccpgwaixelqperneejfuayu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008697.0796382-172-125632498769812/AnsiballZ_mount.py'
Nov 24 18:24:57 compute-0 sudo[114340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:24:57 compute-0 python3.9[114342]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 24 18:24:57 compute-0 ceph-mon[74927]: pgmap v272: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:57 compute-0 sudo[114340]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.c deep-scrub starts
Nov 24 18:24:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.c deep-scrub ok
Nov 24 18:24:58 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v273: 321 pgs: 321 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:58 compute-0 sudo[114492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvklcewssfdcludnwofdybuocwqfqxxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008698.4912019-200-75312925294912/AnsiballZ_file.py'
Nov 24 18:24:58 compute-0 sudo[114492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:24:58 compute-0 ceph-mon[74927]: 8.f scrub starts
Nov 24 18:24:58 compute-0 ceph-mon[74927]: 8.f scrub ok
Nov 24 18:24:58 compute-0 ceph-mon[74927]: 5.16 scrub starts
Nov 24 18:24:58 compute-0 ceph-mon[74927]: 5.16 scrub ok
Nov 24 18:24:58 compute-0 python3.9[114494]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:24:58 compute-0 sudo[114492]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:59 compute-0 sudo[114644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avlispaamzwtdofhsknkhxouapprqxsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008699.1210997-208-76716648752024/AnsiballZ_stat.py'
Nov 24 18:24:59 compute-0 sudo[114644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:24:59 compute-0 python3.9[114646]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:24:59 compute-0 sudo[114644]: pam_unix(sudo:session): session closed for user root
Nov 24 18:24:59 compute-0 ceph-mon[74927]: 8.c deep-scrub starts
Nov 24 18:24:59 compute-0 ceph-mon[74927]: 8.c deep-scrub ok
Nov 24 18:24:59 compute-0 ceph-mon[74927]: pgmap v273: 321 pgs: 321 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:24:59 compute-0 sudo[114722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqkqddptegcivibfgwsjzvxiqsodbvdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008699.1210997-208-76716648752024/AnsiballZ_file.py'
Nov 24 18:24:59 compute-0 sudo[114722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:25:00 compute-0 python3.9[114724]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:25:00 compute-0 sudo[114722]: pam_unix(sudo:session): session closed for user root
Nov 24 18:25:00 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v274: 321 pgs: 321 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:00 compute-0 sudo[114874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibvxuvuboeslwcozggabwycptwzczyku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008700.5629165-229-125860288087570/AnsiballZ_stat.py'
Nov 24 18:25:00 compute-0 sudo[114874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:25:00 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Nov 24 18:25:00 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Nov 24 18:25:00 compute-0 python3.9[114876]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:25:00 compute-0 sudo[114874]: pam_unix(sudo:session): session closed for user root
Nov 24 18:25:01 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Nov 24 18:25:01 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Nov 24 18:25:01 compute-0 sudo[115028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxybbukpjtnmjyhxjuixgyrrkalhakpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008701.422314-242-165256264783803/AnsiballZ_getent.py'
Nov 24 18:25:01 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Nov 24 18:25:01 compute-0 sudo[115028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:25:01 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Nov 24 18:25:01 compute-0 ceph-mon[74927]: pgmap v274: 321 pgs: 321 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:01 compute-0 ceph-mon[74927]: 11.2 scrub starts
Nov 24 18:25:01 compute-0 ceph-mon[74927]: 11.2 scrub ok
Nov 24 18:25:01 compute-0 python3.9[115030]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 24 18:25:02 compute-0 sudo[115028]: pam_unix(sudo:session): session closed for user root
Nov 24 18:25:02 compute-0 sudo[115181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lntyyaleriqiuolmtngnwvwhapbwufvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008702.2199101-252-63170500738118/AnsiballZ_getent.py'
Nov 24 18:25:02 compute-0 sudo[115181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:25:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:25:02 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v275: 321 pgs: 321 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:02 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Nov 24 18:25:02 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Nov 24 18:25:02 compute-0 python3.9[115183]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 24 18:25:02 compute-0 sudo[115181]: pam_unix(sudo:session): session closed for user root
Nov 24 18:25:02 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Nov 24 18:25:02 compute-0 ceph-mon[74927]: 5.12 scrub starts
Nov 24 18:25:02 compute-0 ceph-mon[74927]: 5.12 scrub ok
Nov 24 18:25:02 compute-0 ceph-mon[74927]: 11.3 scrub starts
Nov 24 18:25:02 compute-0 ceph-mon[74927]: 11.3 scrub ok
Nov 24 18:25:02 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Nov 24 18:25:03 compute-0 sudo[115335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmadrtruwshiavjqxvmugovnbespzzdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008702.8450613-260-7234941643113/AnsiballZ_group.py'
Nov 24 18:25:03 compute-0 sudo[115335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:25:03 compute-0 python3.9[115337]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 18:25:03 compute-0 sudo[115335]: pam_unix(sudo:session): session closed for user root
Nov 24 18:25:03 compute-0 sudo[115487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxppyiqxurwznaomrqyhjvtvtthkpksv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008703.6957235-269-160044470414008/AnsiballZ_file.py'
Nov 24 18:25:03 compute-0 sudo[115487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:25:03 compute-0 ceph-mon[74927]: pgmap v275: 321 pgs: 321 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:03 compute-0 ceph-mon[74927]: 8.1 scrub starts
Nov 24 18:25:03 compute-0 ceph-mon[74927]: 8.1 scrub ok
Nov 24 18:25:03 compute-0 ceph-mon[74927]: 8.2 scrub starts
Nov 24 18:25:03 compute-0 ceph-mon[74927]: 8.2 scrub ok
Nov 24 18:25:04 compute-0 python3.9[115489]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 24 18:25:04 compute-0 sudo[115487]: pam_unix(sudo:session): session closed for user root
Nov 24 18:25:04 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v276: 321 pgs: 321 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:25:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:25:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:25:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:25:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:25:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:25:04 compute-0 sudo[115639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygdkyaitgisbfakxunztahyxuiqmjuvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008704.5501082-280-165456263821571/AnsiballZ_dnf.py'
Nov 24 18:25:04 compute-0 sudo[115639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:25:05 compute-0 python3.9[115641]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:25:06 compute-0 ceph-mon[74927]: pgmap v276: 321 pgs: 321 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:06 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v277: 321 pgs: 321 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:06 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Nov 24 18:25:06 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Nov 24 18:25:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:25:07 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Nov 24 18:25:07 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Nov 24 18:25:08 compute-0 ceph-mon[74927]: pgmap v277: 321 pgs: 321 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:08 compute-0 ceph-mon[74927]: 8.3 scrub starts
Nov 24 18:25:08 compute-0 ceph-mon[74927]: 8.3 scrub ok
Nov 24 18:25:08 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v278: 321 pgs: 321 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:08 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.d scrub starts
Nov 24 18:25:08 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.d scrub ok
Nov 24 18:25:09 compute-0 ceph-mon[74927]: 8.5 scrub starts
Nov 24 18:25:09 compute-0 ceph-mon[74927]: 8.5 scrub ok
Nov 24 18:25:09 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Nov 24 18:25:09 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Nov 24 18:25:10 compute-0 ceph-mon[74927]: pgmap v278: 321 pgs: 321 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:10 compute-0 ceph-mon[74927]: 11.d scrub starts
Nov 24 18:25:10 compute-0 ceph-mon[74927]: 11.d scrub ok
Nov 24 18:25:10 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.f scrub starts
Nov 24 18:25:10 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.f scrub ok
Nov 24 18:25:10 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v279: 321 pgs: 321 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:11 compute-0 ceph-mon[74927]: 11.8 scrub starts
Nov 24 18:25:11 compute-0 ceph-mon[74927]: 11.8 scrub ok
Nov 24 18:25:11 compute-0 ceph-mon[74927]: 11.f scrub starts
Nov 24 18:25:11 compute-0 ceph-mon[74927]: 11.f scrub ok
Nov 24 18:25:11 compute-0 rsyslogd[1008]: imjournal: 313 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 24 18:25:11 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.1 deep-scrub starts
Nov 24 18:25:11 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.1 deep-scrub ok
Nov 24 18:25:12 compute-0 ceph-mon[74927]: pgmap v279: 321 pgs: 321 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:12 compute-0 ceph-mon[74927]: 11.1 deep-scrub starts
Nov 24 18:25:12 compute-0 ceph-mon[74927]: 11.1 deep-scrub ok
Nov 24 18:25:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:25:12 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v280: 321 pgs: 321 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:13 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Nov 24 18:25:13 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Nov 24 18:25:14 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.d scrub starts
Nov 24 18:25:14 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.d scrub ok
Nov 24 18:25:14 compute-0 ceph-mon[74927]: pgmap v280: 321 pgs: 321 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:14 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.e deep-scrub starts
Nov 24 18:25:14 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.e deep-scrub ok
Nov 24 18:25:14 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v281: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:14 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Nov 24 18:25:14 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Nov 24 18:25:15 compute-0 ceph-mon[74927]: 8.7 scrub starts
Nov 24 18:25:15 compute-0 ceph-mon[74927]: 8.7 scrub ok
Nov 24 18:25:15 compute-0 ceph-mon[74927]: 8.d scrub starts
Nov 24 18:25:15 compute-0 ceph-mon[74927]: 8.d scrub ok
Nov 24 18:25:15 compute-0 ceph-mon[74927]: 8.e deep-scrub starts
Nov 24 18:25:15 compute-0 ceph-mon[74927]: 8.e deep-scrub ok
Nov 24 18:25:15 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.a scrub starts
Nov 24 18:25:15 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.a scrub ok
Nov 24 18:25:16 compute-0 ceph-mon[74927]: pgmap v281: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:16 compute-0 ceph-mon[74927]: 8.8 scrub starts
Nov 24 18:25:16 compute-0 ceph-mon[74927]: 8.8 scrub ok
Nov 24 18:25:16 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v282: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:16 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 24 18:25:16 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 24 18:25:17 compute-0 ceph-mon[74927]: 8.a scrub starts
Nov 24 18:25:17 compute-0 ceph-mon[74927]: 8.a scrub ok
Nov 24 18:25:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:25:17 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Nov 24 18:25:17 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Nov 24 18:25:17 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Nov 24 18:25:17 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Nov 24 18:25:17 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Nov 24 18:25:17 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Nov 24 18:25:18 compute-0 ceph-mon[74927]: pgmap v282: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:18 compute-0 ceph-mon[74927]: 11.9 scrub starts
Nov 24 18:25:18 compute-0 ceph-mon[74927]: 11.9 scrub ok
Nov 24 18:25:18 compute-0 ceph-mon[74927]: 11.19 scrub starts
Nov 24 18:25:18 compute-0 ceph-mon[74927]: 11.19 scrub ok
Nov 24 18:25:18 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v283: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:18 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Nov 24 18:25:18 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Nov 24 18:25:19 compute-0 ceph-mon[74927]: 8.13 scrub starts
Nov 24 18:25:19 compute-0 ceph-mon[74927]: 8.13 scrub ok
Nov 24 18:25:19 compute-0 ceph-mon[74927]: 11.18 scrub starts
Nov 24 18:25:19 compute-0 ceph-mon[74927]: 11.18 scrub ok
Nov 24 18:25:19 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Nov 24 18:25:19 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Nov 24 18:25:19 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Nov 24 18:25:19 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Nov 24 18:25:20 compute-0 ceph-mon[74927]: pgmap v283: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:20 compute-0 ceph-mon[74927]: 8.16 scrub starts
Nov 24 18:25:20 compute-0 ceph-mon[74927]: 8.16 scrub ok
Nov 24 18:25:20 compute-0 ceph-mon[74927]: 11.17 scrub starts
Nov 24 18:25:20 compute-0 ceph-mon[74927]: 11.17 scrub ok
Nov 24 18:25:20 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v284: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:20 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.e scrub starts
Nov 24 18:25:20 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.e scrub ok
Nov 24 18:25:21 compute-0 ceph-mon[74927]: 8.17 scrub starts
Nov 24 18:25:21 compute-0 ceph-mon[74927]: 8.17 scrub ok
Nov 24 18:25:21 compute-0 ceph-mon[74927]: 11.e scrub starts
Nov 24 18:25:21 compute-0 ceph-mon[74927]: 11.e scrub ok
Nov 24 18:25:22 compute-0 ceph-mon[74927]: pgmap v284: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:22 compute-0 sudo[115733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:25:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:25:22 compute-0 sudo[115733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:25:22 compute-0 sudo[115733]: pam_unix(sudo:session): session closed for user root
Nov 24 18:25:22 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v285: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:22 compute-0 sudo[115758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:25:22 compute-0 sudo[115758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:25:22 compute-0 sudo[115758]: pam_unix(sudo:session): session closed for user root
Nov 24 18:25:22 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Nov 24 18:25:22 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Nov 24 18:25:22 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Nov 24 18:25:22 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Nov 24 18:25:22 compute-0 sudo[115783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:25:22 compute-0 sudo[115783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:25:22 compute-0 sudo[115783]: pam_unix(sudo:session): session closed for user root
Nov 24 18:25:22 compute-0 sudo[115808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:25:22 compute-0 sudo[115808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:25:22 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Nov 24 18:25:22 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Nov 24 18:25:23 compute-0 sudo[115808]: pam_unix(sudo:session): session closed for user root
Nov 24 18:25:23 compute-0 ceph-mon[74927]: 8.18 scrub starts
Nov 24 18:25:23 compute-0 ceph-mon[74927]: 8.18 scrub ok
Nov 24 18:25:23 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:25:23 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:25:23 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:25:23 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:25:23 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:25:23 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:25:23 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 7166ba42-589a-4ec5-b9fc-eea097202db6 does not exist
Nov 24 18:25:23 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 9702d538-64cd-452e-98d7-ab0c74be62b2 does not exist
Nov 24 18:25:23 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 765e62f6-22fd-4519-be66-d715617981d5 does not exist
Nov 24 18:25:23 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:25:23 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:25:23 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:25:23 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:25:23 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:25:23 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:25:23 compute-0 sudo[115862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:25:23 compute-0 sudo[115862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:25:23 compute-0 sudo[115862]: pam_unix(sudo:session): session closed for user root
Nov 24 18:25:23 compute-0 sudo[115887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:25:23 compute-0 sudo[115887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:25:23 compute-0 sudo[115887]: pam_unix(sudo:session): session closed for user root
Nov 24 18:25:23 compute-0 sudo[115912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:25:23 compute-0 sudo[115912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:25:23 compute-0 sudo[115912]: pam_unix(sudo:session): session closed for user root
Nov 24 18:25:23 compute-0 sudo[115937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:25:23 compute-0 sudo[115937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:25:23 compute-0 podman[116003]: 2025-11-24 18:25:23.608701126 +0000 UTC m=+0.041664157 container create a363bae351f83d052411f9da6163f3de97c33e57852310dddb6667ff398fe6fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 24 18:25:23 compute-0 systemd[1]: Started libpod-conmon-a363bae351f83d052411f9da6163f3de97c33e57852310dddb6667ff398fe6fc.scope.
Nov 24 18:25:23 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:25:23 compute-0 podman[116003]: 2025-11-24 18:25:23.592805431 +0000 UTC m=+0.025768492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:25:23 compute-0 podman[116003]: 2025-11-24 18:25:23.686349776 +0000 UTC m=+0.119312867 container init a363bae351f83d052411f9da6163f3de97c33e57852310dddb6667ff398fe6fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:25:23 compute-0 podman[116003]: 2025-11-24 18:25:23.693988406 +0000 UTC m=+0.126951447 container start a363bae351f83d052411f9da6163f3de97c33e57852310dddb6667ff398fe6fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_payne, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 18:25:23 compute-0 podman[116003]: 2025-11-24 18:25:23.697620577 +0000 UTC m=+0.130583628 container attach a363bae351f83d052411f9da6163f3de97c33e57852310dddb6667ff398fe6fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_payne, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:25:23 compute-0 cranky_payne[116019]: 167 167
Nov 24 18:25:23 compute-0 systemd[1]: libpod-a363bae351f83d052411f9da6163f3de97c33e57852310dddb6667ff398fe6fc.scope: Deactivated successfully.
Nov 24 18:25:23 compute-0 podman[116003]: 2025-11-24 18:25:23.700835596 +0000 UTC m=+0.133798647 container died a363bae351f83d052411f9da6163f3de97c33e57852310dddb6667ff398fe6fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_payne, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:25:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-bca856c7ac6aee16e1cf39792e957e24f334448d527ae7b5d618801fd2aaa209-merged.mount: Deactivated successfully.
Nov 24 18:25:23 compute-0 podman[116003]: 2025-11-24 18:25:23.737091898 +0000 UTC m=+0.170054939 container remove a363bae351f83d052411f9da6163f3de97c33e57852310dddb6667ff398fe6fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_payne, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:25:23 compute-0 systemd[1]: libpod-conmon-a363bae351f83d052411f9da6163f3de97c33e57852310dddb6667ff398fe6fc.scope: Deactivated successfully.
Nov 24 18:25:23 compute-0 podman[116043]: 2025-11-24 18:25:23.894308976 +0000 UTC m=+0.039329139 container create 38b44d80568d396ff6c619a4804f830d7b2400db99ac9a25249446e94865e39e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:25:23 compute-0 systemd[1]: Started libpod-conmon-38b44d80568d396ff6c619a4804f830d7b2400db99ac9a25249446e94865e39e.scope.
Nov 24 18:25:23 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1b deep-scrub starts
Nov 24 18:25:23 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1b deep-scrub ok
Nov 24 18:25:23 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:25:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa9a32f217aca668618b4be74770495ec11853c533bb9741c646092d908f8209/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:25:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa9a32f217aca668618b4be74770495ec11853c533bb9741c646092d908f8209/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:25:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa9a32f217aca668618b4be74770495ec11853c533bb9741c646092d908f8209/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:25:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa9a32f217aca668618b4be74770495ec11853c533bb9741c646092d908f8209/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:25:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa9a32f217aca668618b4be74770495ec11853c533bb9741c646092d908f8209/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:25:23 compute-0 podman[116043]: 2025-11-24 18:25:23.967925046 +0000 UTC m=+0.112945239 container init 38b44d80568d396ff6c619a4804f830d7b2400db99ac9a25249446e94865e39e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_cerf, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:25:23 compute-0 podman[116043]: 2025-11-24 18:25:23.875073408 +0000 UTC m=+0.020093611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:25:23 compute-0 podman[116043]: 2025-11-24 18:25:23.977845863 +0000 UTC m=+0.122866036 container start 38b44d80568d396ff6c619a4804f830d7b2400db99ac9a25249446e94865e39e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_cerf, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:25:23 compute-0 podman[116043]: 2025-11-24 18:25:23.980879298 +0000 UTC m=+0.125899501 container attach 38b44d80568d396ff6c619a4804f830d7b2400db99ac9a25249446e94865e39e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 18:25:24 compute-0 ceph-mon[74927]: pgmap v285: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:24 compute-0 ceph-mon[74927]: 8.19 scrub starts
Nov 24 18:25:24 compute-0 ceph-mon[74927]: 8.19 scrub ok
Nov 24 18:25:24 compute-0 ceph-mon[74927]: 8.4 scrub starts
Nov 24 18:25:24 compute-0 ceph-mon[74927]: 8.4 scrub ok
Nov 24 18:25:24 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:25:24 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:25:24 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:25:24 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:25:24 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:25:24 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:25:24 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Nov 24 18:25:24 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v286: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:24 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Nov 24 18:25:24 compute-0 strange_cerf[116060]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:25:24 compute-0 strange_cerf[116060]: --> relative data size: 1.0
Nov 24 18:25:24 compute-0 strange_cerf[116060]: --> All data devices are unavailable
Nov 24 18:25:25 compute-0 systemd[1]: libpod-38b44d80568d396ff6c619a4804f830d7b2400db99ac9a25249446e94865e39e.scope: Deactivated successfully.
Nov 24 18:25:25 compute-0 conmon[116060]: conmon 38b44d80568d396ff6c6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-38b44d80568d396ff6c619a4804f830d7b2400db99ac9a25249446e94865e39e.scope/container/memory.events
Nov 24 18:25:25 compute-0 podman[116043]: 2025-11-24 18:25:25.030195393 +0000 UTC m=+1.175215576 container died 38b44d80568d396ff6c619a4804f830d7b2400db99ac9a25249446e94865e39e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_cerf, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:25:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa9a32f217aca668618b4be74770495ec11853c533bb9741c646092d908f8209-merged.mount: Deactivated successfully.
Nov 24 18:25:25 compute-0 podman[116043]: 2025-11-24 18:25:25.090327488 +0000 UTC m=+1.235347661 container remove 38b44d80568d396ff6c619a4804f830d7b2400db99ac9a25249446e94865e39e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_cerf, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:25:25 compute-0 systemd[1]: libpod-conmon-38b44d80568d396ff6c619a4804f830d7b2400db99ac9a25249446e94865e39e.scope: Deactivated successfully.
Nov 24 18:25:25 compute-0 sudo[115937]: pam_unix(sudo:session): session closed for user root
Nov 24 18:25:25 compute-0 ceph-mon[74927]: 11.1b deep-scrub starts
Nov 24 18:25:25 compute-0 ceph-mon[74927]: 11.1b deep-scrub ok
Nov 24 18:25:25 compute-0 sudo[116107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:25:25 compute-0 sudo[116107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:25:25 compute-0 sudo[116107]: pam_unix(sudo:session): session closed for user root
Nov 24 18:25:25 compute-0 sudo[116132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:25:25 compute-0 sudo[116132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:25:25 compute-0 sudo[116132]: pam_unix(sudo:session): session closed for user root
Nov 24 18:25:25 compute-0 sudo[116157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:25:25 compute-0 sudo[116157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:25:25 compute-0 sudo[116157]: pam_unix(sudo:session): session closed for user root
Nov 24 18:25:25 compute-0 sudo[116182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:25:25 compute-0 sudo[116182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:25:25 compute-0 podman[116248]: 2025-11-24 18:25:25.706472706 +0000 UTC m=+0.041790980 container create 1d566b7f3d4261596b82fbc6851f1b01fdd47bc5281f1eb1523da4f8af1f2cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_poincare, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:25:25 compute-0 systemd[1]: Started libpod-conmon-1d566b7f3d4261596b82fbc6851f1b01fdd47bc5281f1eb1523da4f8af1f2cfa.scope.
Nov 24 18:25:25 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:25:25 compute-0 podman[116248]: 2025-11-24 18:25:25.689844522 +0000 UTC m=+0.025162786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:25:25 compute-0 podman[116248]: 2025-11-24 18:25:25.789396397 +0000 UTC m=+0.124714681 container init 1d566b7f3d4261596b82fbc6851f1b01fdd47bc5281f1eb1523da4f8af1f2cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:25:25 compute-0 podman[116248]: 2025-11-24 18:25:25.796850643 +0000 UTC m=+0.132168877 container start 1d566b7f3d4261596b82fbc6851f1b01fdd47bc5281f1eb1523da4f8af1f2cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_poincare, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:25:25 compute-0 podman[116248]: 2025-11-24 18:25:25.800047212 +0000 UTC m=+0.135365506 container attach 1d566b7f3d4261596b82fbc6851f1b01fdd47bc5281f1eb1523da4f8af1f2cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:25:25 compute-0 vibrant_poincare[116264]: 167 167
Nov 24 18:25:25 compute-0 systemd[1]: libpod-1d566b7f3d4261596b82fbc6851f1b01fdd47bc5281f1eb1523da4f8af1f2cfa.scope: Deactivated successfully.
Nov 24 18:25:25 compute-0 podman[116248]: 2025-11-24 18:25:25.802577015 +0000 UTC m=+0.137895259 container died 1d566b7f3d4261596b82fbc6851f1b01fdd47bc5281f1eb1523da4f8af1f2cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 18:25:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-e65217805defbe0f795a9e85379894c43f536d6a5a714679e95e67f51a0ffe0d-merged.mount: Deactivated successfully.
Nov 24 18:25:25 compute-0 podman[116248]: 2025-11-24 18:25:25.855108971 +0000 UTC m=+0.190427255 container remove 1d566b7f3d4261596b82fbc6851f1b01fdd47bc5281f1eb1523da4f8af1f2cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_poincare, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 24 18:25:25 compute-0 systemd[1]: libpod-conmon-1d566b7f3d4261596b82fbc6851f1b01fdd47bc5281f1eb1523da4f8af1f2cfa.scope: Deactivated successfully.
Nov 24 18:25:26 compute-0 podman[116288]: 2025-11-24 18:25:26.046100119 +0000 UTC m=+0.053326157 container create 0786bb67dea9ba99e8550c320e43915d4ebacfa3d8b7b86be2a431ecd2b66c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_nash, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 18:25:26 compute-0 systemd[1]: Started libpod-conmon-0786bb67dea9ba99e8550c320e43915d4ebacfa3d8b7b86be2a431ecd2b66c29.scope.
Nov 24 18:25:26 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:25:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f560a4cc31bab8faf509c96f699cc2b83fa70a9343fb3201b1e8e9379ef89f53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:25:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f560a4cc31bab8faf509c96f699cc2b83fa70a9343fb3201b1e8e9379ef89f53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:25:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f560a4cc31bab8faf509c96f699cc2b83fa70a9343fb3201b1e8e9379ef89f53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:25:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f560a4cc31bab8faf509c96f699cc2b83fa70a9343fb3201b1e8e9379ef89f53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:25:26 compute-0 podman[116288]: 2025-11-24 18:25:26.017863907 +0000 UTC m=+0.025089955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:25:26 compute-0 podman[116288]: 2025-11-24 18:25:26.127222736 +0000 UTC m=+0.134448804 container init 0786bb67dea9ba99e8550c320e43915d4ebacfa3d8b7b86be2a431ecd2b66c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:25:26 compute-0 podman[116288]: 2025-11-24 18:25:26.144716951 +0000 UTC m=+0.151942999 container start 0786bb67dea9ba99e8550c320e43915d4ebacfa3d8b7b86be2a431ecd2b66c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:25:26 compute-0 podman[116288]: 2025-11-24 18:25:26.148995117 +0000 UTC m=+0.156341008 container attach 0786bb67dea9ba99e8550c320e43915d4ebacfa3d8b7b86be2a431ecd2b66c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_nash, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 24 18:25:26 compute-0 ceph-mon[74927]: 8.1e scrub starts
Nov 24 18:25:26 compute-0 ceph-mon[74927]: pgmap v286: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:26 compute-0 ceph-mon[74927]: 8.1e scrub ok
Nov 24 18:25:26 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Nov 24 18:25:26 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v287: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:26 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Nov 24 18:25:26 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Nov 24 18:25:26 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]: {
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:     "0": [
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:         {
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "devices": [
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "/dev/loop3"
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             ],
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "lv_name": "ceph_lv0",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "lv_size": "21470642176",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "name": "ceph_lv0",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "tags": {
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.cluster_name": "ceph",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.crush_device_class": "",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.encrypted": "0",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.osd_id": "0",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.type": "block",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.vdo": "0"
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             },
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "type": "block",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "vg_name": "ceph_vg0"
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:         }
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:     ],
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:     "1": [
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:         {
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "devices": [
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "/dev/loop4"
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             ],
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "lv_name": "ceph_lv1",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "lv_size": "21470642176",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "name": "ceph_lv1",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "tags": {
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.cluster_name": "ceph",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.crush_device_class": "",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.encrypted": "0",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.osd_id": "1",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.type": "block",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.vdo": "0"
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             },
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "type": "block",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "vg_name": "ceph_vg1"
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:         }
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:     ],
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:     "2": [
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:         {
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "devices": [
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "/dev/loop5"
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             ],
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "lv_name": "ceph_lv2",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "lv_size": "21470642176",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "name": "ceph_lv2",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "tags": {
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.cluster_name": "ceph",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.crush_device_class": "",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.encrypted": "0",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.osd_id": "2",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.type": "block",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:                 "ceph.vdo": "0"
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             },
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "type": "block",
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:             "vg_name": "ceph_vg2"
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:         }
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]:     ]
Nov 24 18:25:26 compute-0 flamboyant_nash[116306]: }
Nov 24 18:25:26 compute-0 systemd[1]: libpod-0786bb67dea9ba99e8550c320e43915d4ebacfa3d8b7b86be2a431ecd2b66c29.scope: Deactivated successfully.
Nov 24 18:25:26 compute-0 podman[116288]: 2025-11-24 18:25:26.920455165 +0000 UTC m=+0.927681173 container died 0786bb67dea9ba99e8550c320e43915d4ebacfa3d8b7b86be2a431ecd2b66c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_nash, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:25:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-f560a4cc31bab8faf509c96f699cc2b83fa70a9343fb3201b1e8e9379ef89f53-merged.mount: Deactivated successfully.
Nov 24 18:25:26 compute-0 podman[116288]: 2025-11-24 18:25:26.971281349 +0000 UTC m=+0.978507357 container remove 0786bb67dea9ba99e8550c320e43915d4ebacfa3d8b7b86be2a431ecd2b66c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_nash, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:25:26 compute-0 systemd[1]: libpod-conmon-0786bb67dea9ba99e8550c320e43915d4ebacfa3d8b7b86be2a431ecd2b66c29.scope: Deactivated successfully.
Nov 24 18:25:27 compute-0 sudo[116182]: pam_unix(sudo:session): session closed for user root
Nov 24 18:25:27 compute-0 sudo[116338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:25:27 compute-0 sudo[116338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:25:27 compute-0 sudo[116338]: pam_unix(sudo:session): session closed for user root
Nov 24 18:25:27 compute-0 sudo[116364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:25:27 compute-0 sudo[116364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:25:27 compute-0 sudo[116364]: pam_unix(sudo:session): session closed for user root
Nov 24 18:25:27 compute-0 ceph-mon[74927]: 8.1f scrub starts
Nov 24 18:25:27 compute-0 ceph-mon[74927]: 8.1f scrub ok
Nov 24 18:25:27 compute-0 sudo[116389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:25:27 compute-0 sudo[116389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:25:27 compute-0 sudo[116389]: pam_unix(sudo:session): session closed for user root
Nov 24 18:25:27 compute-0 sudo[116414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:25:27 compute-0 sudo[116414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:25:27 compute-0 podman[116479]: 2025-11-24 18:25:27.504078234 +0000 UTC m=+0.034478868 container create c44fc5e1b021f22abf8c6f0bd3f7cd6e6c0edbcbf2698fd6fd075ed6b3f913cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 18:25:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:25:27 compute-0 systemd[1]: Started libpod-conmon-c44fc5e1b021f22abf8c6f0bd3f7cd6e6c0edbcbf2698fd6fd075ed6b3f913cf.scope.
Nov 24 18:25:27 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:25:27 compute-0 podman[116479]: 2025-11-24 18:25:27.571434189 +0000 UTC m=+0.101834863 container init c44fc5e1b021f22abf8c6f0bd3f7cd6e6c0edbcbf2698fd6fd075ed6b3f913cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 18:25:27 compute-0 podman[116479]: 2025-11-24 18:25:27.576481914 +0000 UTC m=+0.106882558 container start c44fc5e1b021f22abf8c6f0bd3f7cd6e6c0edbcbf2698fd6fd075ed6b3f913cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_yalow, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:25:27 compute-0 podman[116479]: 2025-11-24 18:25:27.579056158 +0000 UTC m=+0.109456822 container attach c44fc5e1b021f22abf8c6f0bd3f7cd6e6c0edbcbf2698fd6fd075ed6b3f913cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 24 18:25:27 compute-0 naughty_yalow[116495]: 167 167
Nov 24 18:25:27 compute-0 systemd[1]: libpod-c44fc5e1b021f22abf8c6f0bd3f7cd6e6c0edbcbf2698fd6fd075ed6b3f913cf.scope: Deactivated successfully.
Nov 24 18:25:27 compute-0 podman[116479]: 2025-11-24 18:25:27.583230112 +0000 UTC m=+0.113630756 container died c44fc5e1b021f22abf8c6f0bd3f7cd6e6c0edbcbf2698fd6fd075ed6b3f913cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_yalow, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:25:27 compute-0 podman[116479]: 2025-11-24 18:25:27.489935243 +0000 UTC m=+0.020335907 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:25:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-97c31b61c69bddc5186830d0cb49db872638d16a7fcb06df947625a639c48777-merged.mount: Deactivated successfully.
Nov 24 18:25:27 compute-0 podman[116479]: 2025-11-24 18:25:27.613891564 +0000 UTC m=+0.144292208 container remove c44fc5e1b021f22abf8c6f0bd3f7cd6e6c0edbcbf2698fd6fd075ed6b3f913cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_yalow, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 18:25:27 compute-0 systemd[1]: libpod-conmon-c44fc5e1b021f22abf8c6f0bd3f7cd6e6c0edbcbf2698fd6fd075ed6b3f913cf.scope: Deactivated successfully.
Nov 24 18:25:27 compute-0 podman[116519]: 2025-11-24 18:25:27.758884769 +0000 UTC m=+0.050074526 container create 1f89d947d1fa8fd1fae876399d4d22b9168b4ff51be93e962d0edd293dd19a1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:25:27 compute-0 systemd[1]: Started libpod-conmon-1f89d947d1fa8fd1fae876399d4d22b9168b4ff51be93e962d0edd293dd19a1d.scope.
Nov 24 18:25:27 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:25:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be9ca8e276597f7ab863b81c48200cf36d67930309351c5d9a589c01e51693a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:25:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be9ca8e276597f7ab863b81c48200cf36d67930309351c5d9a589c01e51693a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:25:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be9ca8e276597f7ab863b81c48200cf36d67930309351c5d9a589c01e51693a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:25:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be9ca8e276597f7ab863b81c48200cf36d67930309351c5d9a589c01e51693a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:25:27 compute-0 podman[116519]: 2025-11-24 18:25:27.733771624 +0000 UTC m=+0.024961461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:25:27 compute-0 podman[116519]: 2025-11-24 18:25:27.834705884 +0000 UTC m=+0.125895711 container init 1f89d947d1fa8fd1fae876399d4d22b9168b4ff51be93e962d0edd293dd19a1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 18:25:27 compute-0 podman[116519]: 2025-11-24 18:25:27.846721892 +0000 UTC m=+0.137911679 container start 1f89d947d1fa8fd1fae876399d4d22b9168b4ff51be93e962d0edd293dd19a1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 18:25:27 compute-0 podman[116519]: 2025-11-24 18:25:27.850597949 +0000 UTC m=+0.141787786 container attach 1f89d947d1fa8fd1fae876399d4d22b9168b4ff51be93e962d0edd293dd19a1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Nov 24 18:25:28 compute-0 ceph-mon[74927]: 9.2 scrub starts
Nov 24 18:25:28 compute-0 ceph-mon[74927]: pgmap v287: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:28 compute-0 ceph-mon[74927]: 9.2 scrub ok
Nov 24 18:25:28 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v288: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:28 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Nov 24 18:25:28 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Nov 24 18:25:28 compute-0 ecstatic_galois[116535]: {
Nov 24 18:25:28 compute-0 ecstatic_galois[116535]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:25:28 compute-0 ecstatic_galois[116535]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:25:28 compute-0 ecstatic_galois[116535]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:25:28 compute-0 ecstatic_galois[116535]:         "osd_id": 0,
Nov 24 18:25:28 compute-0 ecstatic_galois[116535]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:25:28 compute-0 ecstatic_galois[116535]:         "type": "bluestore"
Nov 24 18:25:28 compute-0 ecstatic_galois[116535]:     },
Nov 24 18:25:28 compute-0 ecstatic_galois[116535]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:25:28 compute-0 ecstatic_galois[116535]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:25:28 compute-0 ecstatic_galois[116535]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:25:28 compute-0 ecstatic_galois[116535]:         "osd_id": 1,
Nov 24 18:25:28 compute-0 ecstatic_galois[116535]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:25:28 compute-0 ecstatic_galois[116535]:         "type": "bluestore"
Nov 24 18:25:28 compute-0 ecstatic_galois[116535]:     },
Nov 24 18:25:28 compute-0 ecstatic_galois[116535]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:25:28 compute-0 ecstatic_galois[116535]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:25:28 compute-0 ecstatic_galois[116535]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:25:28 compute-0 ecstatic_galois[116535]:         "osd_id": 2,
Nov 24 18:25:28 compute-0 ecstatic_galois[116535]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:25:28 compute-0 ecstatic_galois[116535]:         "type": "bluestore"
Nov 24 18:25:28 compute-0 ecstatic_galois[116535]:     }
Nov 24 18:25:28 compute-0 ecstatic_galois[116535]: }
Nov 24 18:25:28 compute-0 systemd[1]: libpod-1f89d947d1fa8fd1fae876399d4d22b9168b4ff51be93e962d0edd293dd19a1d.scope: Deactivated successfully.
Nov 24 18:25:28 compute-0 podman[116519]: 2025-11-24 18:25:28.762837396 +0000 UTC m=+1.054027183 container died 1f89d947d1fa8fd1fae876399d4d22b9168b4ff51be93e962d0edd293dd19a1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_galois, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:25:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-be9ca8e276597f7ab863b81c48200cf36d67930309351c5d9a589c01e51693a0-merged.mount: Deactivated successfully.
Nov 24 18:25:28 compute-0 podman[116519]: 2025-11-24 18:25:28.831005331 +0000 UTC m=+1.122195088 container remove 1f89d947d1fa8fd1fae876399d4d22b9168b4ff51be93e962d0edd293dd19a1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_galois, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:25:28 compute-0 systemd[1]: libpod-conmon-1f89d947d1fa8fd1fae876399d4d22b9168b4ff51be93e962d0edd293dd19a1d.scope: Deactivated successfully.
Nov 24 18:25:28 compute-0 sudo[116414]: pam_unix(sudo:session): session closed for user root
Nov 24 18:25:28 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:25:28 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:25:28 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:25:28 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:25:28 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev e8ef98df-9623-4830-b837-788c64e5968d does not exist
Nov 24 18:25:28 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev d2ed7a39-3a7f-4b7c-8303-bce647536687 does not exist
Nov 24 18:25:28 compute-0 sudo[116588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:25:28 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Nov 24 18:25:28 compute-0 sudo[116588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:25:28 compute-0 sudo[116588]: pam_unix(sudo:session): session closed for user root
Nov 24 18:25:28 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Nov 24 18:25:29 compute-0 sudo[116613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:25:29 compute-0 sudo[116613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:25:29 compute-0 sudo[116613]: pam_unix(sudo:session): session closed for user root
Nov 24 18:25:29 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Nov 24 18:25:29 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Nov 24 18:25:29 compute-0 ceph-mon[74927]: pgmap v288: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:29 compute-0 ceph-mon[74927]: 8.1a scrub starts
Nov 24 18:25:29 compute-0 ceph-mon[74927]: 8.1a scrub ok
Nov 24 18:25:29 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:25:29 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:25:29 compute-0 ceph-mon[74927]: 11.1e scrub starts
Nov 24 18:25:29 compute-0 ceph-mon[74927]: 11.1e scrub ok
Nov 24 18:25:29 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Nov 24 18:25:29 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Nov 24 18:25:30 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v289: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:30 compute-0 ceph-mon[74927]: 9.4 scrub starts
Nov 24 18:25:30 compute-0 ceph-mon[74927]: 9.4 scrub ok
Nov 24 18:25:30 compute-0 ceph-mon[74927]: 11.1c scrub starts
Nov 24 18:25:30 compute-0 ceph-mon[74927]: 11.1c scrub ok
Nov 24 18:25:30 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Nov 24 18:25:30 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Nov 24 18:25:31 compute-0 ceph-mon[74927]: pgmap v289: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:31 compute-0 ceph-mon[74927]: 8.12 scrub starts
Nov 24 18:25:31 compute-0 ceph-mon[74927]: 8.12 scrub ok
Nov 24 18:25:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:25:32 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v290: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:33 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.a scrub starts
Nov 24 18:25:33 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.a scrub ok
Nov 24 18:25:33 compute-0 ceph-mon[74927]: pgmap v290: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:25:34
Nov 24 18:25:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:25:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:25:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'default.rgw.control', 'vms', '.mgr', 'images', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log']
Nov 24 18:25:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:25:34 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v291: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:34 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.14 deep-scrub starts
Nov 24 18:25:34 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.14 deep-scrub ok
Nov 24 18:25:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:25:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:25:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:25:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:25:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:25:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:25:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:25:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:25:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:25:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:25:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:25:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:25:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:25:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:25:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:25:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:25:34 compute-0 ceph-mon[74927]: 9.a scrub starts
Nov 24 18:25:34 compute-0 ceph-mon[74927]: 9.a scrub ok
Nov 24 18:25:35 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Nov 24 18:25:35 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Nov 24 18:25:35 compute-0 ceph-mon[74927]: pgmap v291: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:35 compute-0 ceph-mon[74927]: 8.14 deep-scrub starts
Nov 24 18:25:35 compute-0 ceph-mon[74927]: 8.14 deep-scrub ok
Nov 24 18:25:36 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Nov 24 18:25:36 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Nov 24 18:25:36 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v292: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:36 compute-0 ceph-mon[74927]: 8.1d scrub starts
Nov 24 18:25:36 compute-0 ceph-mon[74927]: 8.1d scrub ok
Nov 24 18:25:36 compute-0 ceph-mon[74927]: 9.10 scrub starts
Nov 24 18:25:36 compute-0 ceph-mon[74927]: 9.10 scrub ok
Nov 24 18:25:37 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Nov 24 18:25:37 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Nov 24 18:25:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:25:37 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.d scrub starts
Nov 24 18:25:37 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.d scrub ok
Nov 24 18:25:37 compute-0 ceph-mon[74927]: pgmap v292: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:37 compute-0 ceph-mon[74927]: 9.12 scrub starts
Nov 24 18:25:37 compute-0 ceph-mon[74927]: 9.12 scrub ok
Nov 24 18:25:38 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Nov 24 18:25:38 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Nov 24 18:25:38 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v293: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:38 compute-0 ceph-mon[74927]: 10.d scrub starts
Nov 24 18:25:38 compute-0 ceph-mon[74927]: 10.d scrub ok
Nov 24 18:25:39 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 24 18:25:39 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 24 18:25:39 compute-0 ceph-mon[74927]: 10.1e scrub starts
Nov 24 18:25:39 compute-0 ceph-mon[74927]: 10.1e scrub ok
Nov 24 18:25:39 compute-0 ceph-mon[74927]: pgmap v293: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:40 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Nov 24 18:25:40 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Nov 24 18:25:40 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Nov 24 18:25:40 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Nov 24 18:25:40 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v294: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:40 compute-0 ceph-mon[74927]: 11.11 scrub starts
Nov 24 18:25:40 compute-0 ceph-mon[74927]: 11.11 scrub ok
Nov 24 18:25:40 compute-0 ceph-mon[74927]: 9.14 scrub starts
Nov 24 18:25:41 compute-0 ceph-mon[74927]: 9.14 scrub ok
Nov 24 18:25:41 compute-0 ceph-mon[74927]: 10.7 scrub starts
Nov 24 18:25:41 compute-0 ceph-mon[74927]: 10.7 scrub ok
Nov 24 18:25:41 compute-0 ceph-mon[74927]: pgmap v294: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:25:42 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v295: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:25:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:25:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:25:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:25:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:25:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:25:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:25:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:25:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:25:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:25:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:25:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:25:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:25:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:25:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:25:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:25:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:25:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:25:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:25:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:25:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:25:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:25:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:25:43 compute-0 ceph-mon[74927]: pgmap v295: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:44 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v296: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:45 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Nov 24 18:25:45 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Nov 24 18:25:45 compute-0 ceph-mon[74927]: pgmap v296: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:46 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Nov 24 18:25:46 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Nov 24 18:25:46 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v297: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:46 compute-0 ceph-mon[74927]: 10.4 scrub starts
Nov 24 18:25:46 compute-0 ceph-mon[74927]: 10.4 scrub ok
Nov 24 18:25:47 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Nov 24 18:25:47 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Nov 24 18:25:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:25:48 compute-0 ceph-mon[74927]: 10.8 scrub starts
Nov 24 18:25:48 compute-0 ceph-mon[74927]: 10.8 scrub ok
Nov 24 18:25:48 compute-0 ceph-mon[74927]: pgmap v297: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:25:48.016868) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008748017023, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7217, "num_deletes": 251, "total_data_size": 8863174, "memory_usage": 9069200, "flush_reason": "Manual Compaction"}
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008748049643, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7143782, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 132, "largest_seqno": 7346, "table_properties": {"data_size": 7117496, "index_size": 17019, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8197, "raw_key_size": 75662, "raw_average_key_size": 23, "raw_value_size": 7055120, "raw_average_value_size": 2167, "num_data_blocks": 747, "num_entries": 3255, "num_filter_entries": 3255, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008327, "oldest_key_time": 1764008327, "file_creation_time": 1764008748, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 32864 microseconds, and 14097 cpu microseconds.
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:25:48.049734) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7143782 bytes OK
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:25:48.049783) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:25:48.051111) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:25:48.051124) EVENT_LOG_v1 {"time_micros": 1764008748051120, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:25:48.051140) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 8831760, prev total WAL file size 8831760, number of live WAL files 2.
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:25:48.053175) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(6976KB) 13(50KB) 8(1944B)]
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008748053321, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7197520, "oldest_snapshot_seqno": -1}
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3067 keys, 7154591 bytes, temperature: kUnknown
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008748089967, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7154591, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7128810, "index_size": 17031, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7685, "raw_key_size": 73639, "raw_average_key_size": 24, "raw_value_size": 7068103, "raw_average_value_size": 2304, "num_data_blocks": 749, "num_entries": 3067, "num_filter_entries": 3067, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008324, "oldest_key_time": 0, "file_creation_time": 1764008748, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:25:48.090375) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7154591 bytes
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:25:48.091570) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.0 rd, 193.8 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(6.9, 0.0 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3356, records dropped: 289 output_compression: NoCompression
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:25:48.091591) EVENT_LOG_v1 {"time_micros": 1764008748091581, "job": 4, "event": "compaction_finished", "compaction_time_micros": 36910, "compaction_time_cpu_micros": 20479, "output_level": 6, "num_output_files": 1, "total_output_size": 7154591, "num_input_records": 3356, "num_output_records": 3067, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008748092986, "job": 4, "event": "table_file_deletion", "file_number": 19}
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008748093037, "job": 4, "event": "table_file_deletion", "file_number": 13}
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008748093192, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 24 18:25:48 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:25:48.053040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:25:48 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Nov 24 18:25:48 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Nov 24 18:25:48 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v298: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:49 compute-0 ceph-mon[74927]: 10.1 scrub starts
Nov 24 18:25:49 compute-0 ceph-mon[74927]: 10.1 scrub ok
Nov 24 18:25:50 compute-0 ceph-mon[74927]: 9.1a scrub starts
Nov 24 18:25:50 compute-0 ceph-mon[74927]: 9.1a scrub ok
Nov 24 18:25:50 compute-0 ceph-mon[74927]: pgmap v298: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:50 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Nov 24 18:25:50 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Nov 24 18:25:50 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v299: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:51 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.e scrub starts
Nov 24 18:25:51 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.e scrub ok
Nov 24 18:25:51 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Nov 24 18:25:51 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Nov 24 18:25:51 compute-0 sudo[115639]: pam_unix(sudo:session): session closed for user root
Nov 24 18:25:52 compute-0 ceph-mon[74927]: 11.5 scrub starts
Nov 24 18:25:52 compute-0 ceph-mon[74927]: 11.5 scrub ok
Nov 24 18:25:52 compute-0 ceph-mon[74927]: pgmap v299: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:52 compute-0 ceph-mon[74927]: 10.e scrub starts
Nov 24 18:25:52 compute-0 ceph-mon[74927]: 10.e scrub ok
Nov 24 18:25:52 compute-0 sshd-session[111486]: Connection closed by 192.168.122.30 port 45034
Nov 24 18:25:52 compute-0 sshd-session[111483]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:25:52 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Nov 24 18:25:52 compute-0 systemd[1]: session-35.scope: Consumed 28.334s CPU time.
Nov 24 18:25:52 compute-0 systemd-logind[822]: Session 35 logged out. Waiting for processes to exit.
Nov 24 18:25:52 compute-0 systemd-logind[822]: Removed session 35.
Nov 24 18:25:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:25:52 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Nov 24 18:25:52 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v300: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:52 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Nov 24 18:25:53 compute-0 ceph-mon[74927]: 8.1b scrub starts
Nov 24 18:25:53 compute-0 ceph-mon[74927]: 8.1b scrub ok
Nov 24 18:25:54 compute-0 ceph-mon[74927]: 11.7 scrub starts
Nov 24 18:25:54 compute-0 ceph-mon[74927]: pgmap v300: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:54 compute-0 ceph-mon[74927]: 11.7 scrub ok
Nov 24 18:25:54 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v301: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:54 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Nov 24 18:25:54 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Nov 24 18:25:55 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Nov 24 18:25:55 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Nov 24 18:25:56 compute-0 ceph-mon[74927]: pgmap v301: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:56 compute-0 ceph-mon[74927]: 11.12 scrub starts
Nov 24 18:25:56 compute-0 ceph-mon[74927]: 11.12 scrub ok
Nov 24 18:25:56 compute-0 ceph-mon[74927]: 10.15 scrub starts
Nov 24 18:25:56 compute-0 ceph-mon[74927]: 10.15 scrub ok
Nov 24 18:25:56 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Nov 24 18:25:56 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Nov 24 18:25:56 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v302: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:57 compute-0 ceph-mon[74927]: 10.17 scrub starts
Nov 24 18:25:57 compute-0 ceph-mon[74927]: 10.17 scrub ok
Nov 24 18:25:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:25:57 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Nov 24 18:25:57 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Nov 24 18:25:58 compute-0 ceph-mon[74927]: pgmap v302: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:58 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v303: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:25:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Nov 24 18:25:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Nov 24 18:25:59 compute-0 ceph-mon[74927]: 11.1f scrub starts
Nov 24 18:25:59 compute-0 ceph-mon[74927]: 11.1f scrub ok
Nov 24 18:25:59 compute-0 ceph-mon[74927]: 10.16 scrub starts
Nov 24 18:25:59 compute-0 ceph-mon[74927]: 10.16 scrub ok
Nov 24 18:26:00 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Nov 24 18:26:00 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Nov 24 18:26:00 compute-0 ceph-mon[74927]: pgmap v303: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:00 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v304: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:01 compute-0 ceph-mon[74927]: 11.1a scrub starts
Nov 24 18:26:01 compute-0 ceph-mon[74927]: 11.1a scrub ok
Nov 24 18:26:01 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Nov 24 18:26:01 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Nov 24 18:26:02 compute-0 ceph-mon[74927]: pgmap v304: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:02 compute-0 ceph-mon[74927]: 10.9 scrub starts
Nov 24 18:26:02 compute-0 ceph-mon[74927]: 10.9 scrub ok
Nov 24 18:26:02 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Nov 24 18:26:02 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Nov 24 18:26:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:26:02 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v305: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:03 compute-0 ceph-mon[74927]: 9.1d scrub starts
Nov 24 18:26:03 compute-0 ceph-mon[74927]: 9.1d scrub ok
Nov 24 18:26:03 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.a scrub starts
Nov 24 18:26:03 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.a scrub ok
Nov 24 18:26:04 compute-0 ceph-mon[74927]: pgmap v305: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:04 compute-0 ceph-mon[74927]: 11.a scrub starts
Nov 24 18:26:04 compute-0 ceph-mon[74927]: 11.a scrub ok
Nov 24 18:26:04 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v306: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:26:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:26:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:26:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:26:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:26:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:26:04 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.b scrub starts
Nov 24 18:26:04 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.b scrub ok
Nov 24 18:26:06 compute-0 ceph-mon[74927]: pgmap v306: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:06 compute-0 ceph-mon[74927]: 11.b scrub starts
Nov 24 18:26:06 compute-0 ceph-mon[74927]: 11.b scrub ok
Nov 24 18:26:06 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v307: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:06 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.c scrub starts
Nov 24 18:26:06 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.c scrub ok
Nov 24 18:26:07 compute-0 ceph-mon[74927]: 11.c scrub starts
Nov 24 18:26:07 compute-0 ceph-mon[74927]: 11.c scrub ok
Nov 24 18:26:07 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.11 deep-scrub starts
Nov 24 18:26:07 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.11 deep-scrub ok
Nov 24 18:26:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:26:08 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Nov 24 18:26:08 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Nov 24 18:26:08 compute-0 ceph-mon[74927]: pgmap v307: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:08 compute-0 ceph-mon[74927]: 9.11 deep-scrub starts
Nov 24 18:26:08 compute-0 ceph-mon[74927]: 9.11 deep-scrub ok
Nov 24 18:26:08 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v308: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:08 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Nov 24 18:26:08 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Nov 24 18:26:09 compute-0 ceph-mon[74927]: 8.1c scrub starts
Nov 24 18:26:09 compute-0 ceph-mon[74927]: 8.1c scrub ok
Nov 24 18:26:09 compute-0 ceph-mon[74927]: pgmap v308: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:09 compute-0 ceph-mon[74927]: 11.13 scrub starts
Nov 24 18:26:09 compute-0 ceph-mon[74927]: 11.13 scrub ok
Nov 24 18:26:09 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Nov 24 18:26:09 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Nov 24 18:26:09 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Nov 24 18:26:09 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Nov 24 18:26:10 compute-0 ceph-mon[74927]: 9.5 scrub starts
Nov 24 18:26:10 compute-0 ceph-mon[74927]: 9.5 scrub ok
Nov 24 18:26:10 compute-0 ceph-mon[74927]: 11.16 scrub starts
Nov 24 18:26:10 compute-0 ceph-mon[74927]: 11.16 scrub ok
Nov 24 18:26:10 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v309: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:10 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Nov 24 18:26:10 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Nov 24 18:26:11 compute-0 ceph-mon[74927]: pgmap v309: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:11 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.b scrub starts
Nov 24 18:26:11 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.b scrub ok
Nov 24 18:26:12 compute-0 ceph-mon[74927]: 11.1d scrub starts
Nov 24 18:26:12 compute-0 ceph-mon[74927]: 11.1d scrub ok
Nov 24 18:26:12 compute-0 ceph-mon[74927]: 9.b scrub starts
Nov 24 18:26:12 compute-0 ceph-mon[74927]: 9.b scrub ok
Nov 24 18:26:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:26:12 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v310: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:12 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.b scrub starts
Nov 24 18:26:12 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.b scrub ok
Nov 24 18:26:12 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Nov 24 18:26:12 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Nov 24 18:26:13 compute-0 ceph-mon[74927]: pgmap v310: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:13 compute-0 ceph-mon[74927]: 10.b scrub starts
Nov 24 18:26:13 compute-0 ceph-mon[74927]: 10.b scrub ok
Nov 24 18:26:13 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Nov 24 18:26:13 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Nov 24 18:26:13 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.e scrub starts
Nov 24 18:26:13 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.e scrub ok
Nov 24 18:26:14 compute-0 ceph-mon[74927]: 8.11 scrub starts
Nov 24 18:26:14 compute-0 ceph-mon[74927]: 8.11 scrub ok
Nov 24 18:26:14 compute-0 ceph-mon[74927]: 10.12 scrub starts
Nov 24 18:26:14 compute-0 ceph-mon[74927]: 10.12 scrub ok
Nov 24 18:26:14 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v311: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:14 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Nov 24 18:26:14 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Nov 24 18:26:15 compute-0 ceph-mon[74927]: 9.e scrub starts
Nov 24 18:26:15 compute-0 ceph-mon[74927]: 9.e scrub ok
Nov 24 18:26:15 compute-0 ceph-mon[74927]: pgmap v311: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:15 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Nov 24 18:26:15 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Nov 24 18:26:15 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Nov 24 18:26:15 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Nov 24 18:26:16 compute-0 ceph-mon[74927]: 9.6 scrub starts
Nov 24 18:26:16 compute-0 ceph-mon[74927]: 9.6 scrub ok
Nov 24 18:26:16 compute-0 ceph-mon[74927]: 10.13 scrub starts
Nov 24 18:26:16 compute-0 ceph-mon[74927]: 10.13 scrub ok
Nov 24 18:26:16 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v312: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:16 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.f scrub starts
Nov 24 18:26:16 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.f scrub ok
Nov 24 18:26:17 compute-0 ceph-mon[74927]: 9.17 scrub starts
Nov 24 18:26:17 compute-0 ceph-mon[74927]: 9.17 scrub ok
Nov 24 18:26:17 compute-0 ceph-mon[74927]: pgmap v312: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:17 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.d scrub starts
Nov 24 18:26:17 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.d scrub ok
Nov 24 18:26:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:26:17 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.7 deep-scrub starts
Nov 24 18:26:18 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.7 deep-scrub ok
Nov 24 18:26:18 compute-0 ceph-mon[74927]: 9.f scrub starts
Nov 24 18:26:18 compute-0 ceph-mon[74927]: 9.f scrub ok
Nov 24 18:26:18 compute-0 ceph-mon[74927]: 9.d scrub starts
Nov 24 18:26:18 compute-0 ceph-mon[74927]: 9.d scrub ok
Nov 24 18:26:18 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v313: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:18 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Nov 24 18:26:19 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Nov 24 18:26:19 compute-0 ceph-mon[74927]: 9.7 deep-scrub starts
Nov 24 18:26:19 compute-0 ceph-mon[74927]: 9.7 deep-scrub ok
Nov 24 18:26:19 compute-0 ceph-mon[74927]: pgmap v313: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:19 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Nov 24 18:26:19 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Nov 24 18:26:20 compute-0 ceph-mon[74927]: 9.8 scrub starts
Nov 24 18:26:20 compute-0 ceph-mon[74927]: 9.8 scrub ok
Nov 24 18:26:20 compute-0 ceph-mon[74927]: 9.3 scrub starts
Nov 24 18:26:20 compute-0 ceph-mon[74927]: 9.3 scrub ok
Nov 24 18:26:20 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v314: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:21 compute-0 ceph-mon[74927]: pgmap v314: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:21 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Nov 24 18:26:21 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Nov 24 18:26:22 compute-0 ceph-mon[74927]: 9.9 scrub starts
Nov 24 18:26:22 compute-0 ceph-mon[74927]: 9.9 scrub ok
Nov 24 18:26:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:26:22 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v315: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:22 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Nov 24 18:26:22 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Nov 24 18:26:22 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Nov 24 18:26:23 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Nov 24 18:26:23 compute-0 ceph-mon[74927]: pgmap v315: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:23 compute-0 ceph-mon[74927]: 10.1a scrub starts
Nov 24 18:26:23 compute-0 ceph-mon[74927]: 10.1a scrub ok
Nov 24 18:26:23 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Nov 24 18:26:23 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Nov 24 18:26:24 compute-0 ceph-mon[74927]: 9.18 scrub starts
Nov 24 18:26:24 compute-0 ceph-mon[74927]: 9.18 scrub ok
Nov 24 18:26:24 compute-0 ceph-mon[74927]: 9.1b scrub starts
Nov 24 18:26:24 compute-0 ceph-mon[74927]: 9.1b scrub ok
Nov 24 18:26:24 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v316: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:24 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Nov 24 18:26:24 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Nov 24 18:26:25 compute-0 sshd-session[116774]: Accepted publickey for zuul from 192.168.122.30 port 57818 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:26:25 compute-0 systemd-logind[822]: New session 36 of user zuul.
Nov 24 18:26:25 compute-0 systemd[1]: Started Session 36 of User zuul.
Nov 24 18:26:25 compute-0 sshd-session[116774]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:26:25 compute-0 ceph-mon[74927]: pgmap v316: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:25 compute-0 ceph-mon[74927]: 10.10 scrub starts
Nov 24 18:26:25 compute-0 ceph-mon[74927]: 10.10 scrub ok
Nov 24 18:26:25 compute-0 python3.9[116927]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 24 18:26:26 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v317: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:27 compute-0 python3.9[117101]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:26:27 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.1 deep-scrub starts
Nov 24 18:26:27 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.1 deep-scrub ok
Nov 24 18:26:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:26:27 compute-0 ceph-mon[74927]: pgmap v317: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:27 compute-0 sudo[117255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-escxazlebazbsfxlsfaykmtvxzvnqxui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008787.5255718-45-101303659585771/AnsiballZ_command.py'
Nov 24 18:26:27 compute-0 sudo[117255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:26:28 compute-0 python3.9[117257]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:26:28 compute-0 sudo[117255]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:28 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v318: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:28 compute-0 ceph-mon[74927]: 9.1 deep-scrub starts
Nov 24 18:26:28 compute-0 ceph-mon[74927]: 9.1 deep-scrub ok
Nov 24 18:26:28 compute-0 sudo[117408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eukocxbjbggrfzmrmwtrvbppfepqalkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008788.502865-57-32855458478608/AnsiballZ_stat.py'
Nov 24 18:26:28 compute-0 sudo[117408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:26:29 compute-0 python3.9[117410]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:26:29 compute-0 sudo[117411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:26:29 compute-0 sudo[117411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:26:29 compute-0 sudo[117411]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:29 compute-0 sudo[117408]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:29 compute-0 sudo[117438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:26:29 compute-0 sudo[117438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:26:29 compute-0 sudo[117438]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:29 compute-0 sudo[117481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:26:29 compute-0 sudo[117481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:26:29 compute-0 sudo[117481]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:29 compute-0 sudo[117512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:26:29 compute-0 sudo[117512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:26:29 compute-0 ceph-mon[74927]: pgmap v318: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:29 compute-0 sudo[117512]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:29 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:26:29 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:26:29 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:26:29 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:26:29 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:26:29 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:26:29 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev a5dfd6e2-0832-4238-9b21-533da6bb36f2 does not exist
Nov 24 18:26:29 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev bb2dacfa-fe32-4b79-a423-ffbd7cf6707d does not exist
Nov 24 18:26:29 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev a4e28797-d880-485b-9efa-5a7b1eb4417a does not exist
Nov 24 18:26:29 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:26:29 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:26:29 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:26:29 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:26:29 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:26:29 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:26:29 compute-0 sudo[117643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:26:29 compute-0 sudo[117643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:26:29 compute-0 sudo[117643]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:29 compute-0 sudo[117689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:26:29 compute-0 sudo[117689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:26:29 compute-0 sudo[117689]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:29 compute-0 sudo[117751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjdpmeeiwsnxektcomdmvkwfknkymiwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008789.4390357-68-58411660020299/AnsiballZ_file.py'
Nov 24 18:26:29 compute-0 sudo[117751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:26:29 compute-0 sudo[117734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:26:29 compute-0 sudo[117734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:26:29 compute-0 sudo[117734]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:29 compute-0 sudo[117771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:26:29 compute-0 sudo[117771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:26:30 compute-0 python3.9[117768]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:26:30 compute-0 sudo[117751]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:30 compute-0 podman[117836]: 2025-11-24 18:26:30.257972155 +0000 UTC m=+0.036914107 container create d1382ffbd2a51c16b9b00d9dadc359272759d837983d94044b7470952cd3b22b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 18:26:30 compute-0 systemd[1]: Started libpod-conmon-d1382ffbd2a51c16b9b00d9dadc359272759d837983d94044b7470952cd3b22b.scope.
Nov 24 18:26:30 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Nov 24 18:26:30 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:26:30 compute-0 podman[117836]: 2025-11-24 18:26:30.330090365 +0000 UTC m=+0.109032367 container init d1382ffbd2a51c16b9b00d9dadc359272759d837983d94044b7470952cd3b22b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_roentgen, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 18:26:30 compute-0 podman[117836]: 2025-11-24 18:26:30.240892586 +0000 UTC m=+0.019834598 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:26:30 compute-0 podman[117836]: 2025-11-24 18:26:30.339587688 +0000 UTC m=+0.118529640 container start d1382ffbd2a51c16b9b00d9dadc359272759d837983d94044b7470952cd3b22b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_roentgen, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 18:26:30 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Nov 24 18:26:30 compute-0 podman[117836]: 2025-11-24 18:26:30.342585721 +0000 UTC m=+0.121527733 container attach d1382ffbd2a51c16b9b00d9dadc359272759d837983d94044b7470952cd3b22b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 24 18:26:30 compute-0 blissful_roentgen[117875]: 167 167
Nov 24 18:26:30 compute-0 systemd[1]: libpod-d1382ffbd2a51c16b9b00d9dadc359272759d837983d94044b7470952cd3b22b.scope: Deactivated successfully.
Nov 24 18:26:30 compute-0 conmon[117875]: conmon d1382ffbd2a51c16b9b0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d1382ffbd2a51c16b9b00d9dadc359272759d837983d94044b7470952cd3b22b.scope/container/memory.events
Nov 24 18:26:30 compute-0 podman[117836]: 2025-11-24 18:26:30.346027756 +0000 UTC m=+0.124969728 container died d1382ffbd2a51c16b9b00d9dadc359272759d837983d94044b7470952cd3b22b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_roentgen, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:26:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fe662a18a6a66779f369905408f134981e708514c545afbd5aaff33ee97be3f-merged.mount: Deactivated successfully.
Nov 24 18:26:30 compute-0 podman[117836]: 2025-11-24 18:26:30.400684267 +0000 UTC m=+0.179626219 container remove d1382ffbd2a51c16b9b00d9dadc359272759d837983d94044b7470952cd3b22b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 24 18:26:30 compute-0 systemd[1]: libpod-conmon-d1382ffbd2a51c16b9b00d9dadc359272759d837983d94044b7470952cd3b22b.scope: Deactivated successfully.
Nov 24 18:26:30 compute-0 podman[117975]: 2025-11-24 18:26:30.574879461 +0000 UTC m=+0.046522512 container create 41f45104d0dde4a117da85a9885002fa542a8562049a64e607a7d43359b6d7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_wilbur, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 24 18:26:30 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v319: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:30 compute-0 systemd[1]: Started libpod-conmon-41f45104d0dde4a117da85a9885002fa542a8562049a64e607a7d43359b6d7b6.scope.
Nov 24 18:26:30 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d28baaf2c7b31f3267068eba2086cfbe674bb2a0505b7e63f88ad75b4aa5d0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d28baaf2c7b31f3267068eba2086cfbe674bb2a0505b7e63f88ad75b4aa5d0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d28baaf2c7b31f3267068eba2086cfbe674bb2a0505b7e63f88ad75b4aa5d0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d28baaf2c7b31f3267068eba2086cfbe674bb2a0505b7e63f88ad75b4aa5d0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d28baaf2c7b31f3267068eba2086cfbe674bb2a0505b7e63f88ad75b4aa5d0b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:26:30 compute-0 podman[117975]: 2025-11-24 18:26:30.559305889 +0000 UTC m=+0.030948960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:26:30 compute-0 podman[117975]: 2025-11-24 18:26:30.658277118 +0000 UTC m=+0.129920189 container init 41f45104d0dde4a117da85a9885002fa542a8562049a64e607a7d43359b6d7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:26:30 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:26:30 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:26:30 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:26:30 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:26:30 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:26:30 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:26:30 compute-0 podman[117975]: 2025-11-24 18:26:30.670091568 +0000 UTC m=+0.141734619 container start 41f45104d0dde4a117da85a9885002fa542a8562049a64e607a7d43359b6d7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_wilbur, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 24 18:26:30 compute-0 podman[117975]: 2025-11-24 18:26:30.673313557 +0000 UTC m=+0.144956628 container attach 41f45104d0dde4a117da85a9885002fa542a8562049a64e607a7d43359b6d7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 18:26:30 compute-0 sudo[118048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxovmezgnaoacotmweoscjbihipqdwlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008790.381411-77-98494012969828/AnsiballZ_file.py'
Nov 24 18:26:30 compute-0 sudo[118048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:26:30 compute-0 python3.9[118051]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:26:30 compute-0 sudo[118048]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:30 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.c deep-scrub starts
Nov 24 18:26:30 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.c deep-scrub ok
Nov 24 18:26:31 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Nov 24 18:26:31 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Nov 24 18:26:31 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.19 deep-scrub starts
Nov 24 18:26:31 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.19 deep-scrub ok
Nov 24 18:26:31 compute-0 python3.9[118214]: ansible-ansible.builtin.service_facts Invoked
Nov 24 18:26:31 compute-0 nostalgic_wilbur[118018]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:26:31 compute-0 nostalgic_wilbur[118018]: --> relative data size: 1.0
Nov 24 18:26:31 compute-0 nostalgic_wilbur[118018]: --> All data devices are unavailable
Nov 24 18:26:31 compute-0 network[118242]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 18:26:31 compute-0 network[118243]: 'network-scripts' will be removed from distribution in near future.
Nov 24 18:26:31 compute-0 network[118244]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 18:26:31 compute-0 systemd[1]: libpod-41f45104d0dde4a117da85a9885002fa542a8562049a64e607a7d43359b6d7b6.scope: Deactivated successfully.
Nov 24 18:26:31 compute-0 podman[117975]: 2025-11-24 18:26:31.749419012 +0000 UTC m=+1.221062063 container died 41f45104d0dde4a117da85a9885002fa542a8562049a64e607a7d43359b6d7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_wilbur, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:26:31 compute-0 ceph-mon[74927]: 9.16 scrub starts
Nov 24 18:26:31 compute-0 ceph-mon[74927]: 9.16 scrub ok
Nov 24 18:26:31 compute-0 ceph-mon[74927]: pgmap v319: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:31 compute-0 ceph-mon[74927]: 9.c deep-scrub starts
Nov 24 18:26:31 compute-0 ceph-mon[74927]: 9.c deep-scrub ok
Nov 24 18:26:31 compute-0 ceph-mon[74927]: 10.19 deep-scrub starts
Nov 24 18:26:31 compute-0 ceph-mon[74927]: 10.19 deep-scrub ok
Nov 24 18:26:31 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Nov 24 18:26:31 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Nov 24 18:26:32 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Nov 24 18:26:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d28baaf2c7b31f3267068eba2086cfbe674bb2a0505b7e63f88ad75b4aa5d0b-merged.mount: Deactivated successfully.
Nov 24 18:26:32 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Nov 24 18:26:32 compute-0 podman[117975]: 2025-11-24 18:26:32.412954942 +0000 UTC m=+1.884597993 container remove 41f45104d0dde4a117da85a9885002fa542a8562049a64e607a7d43359b6d7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_wilbur, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 18:26:32 compute-0 systemd[1]: libpod-conmon-41f45104d0dde4a117da85a9885002fa542a8562049a64e607a7d43359b6d7b6.scope: Deactivated successfully.
Nov 24 18:26:32 compute-0 sudo[117771]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:32 compute-0 sudo[118269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:26:32 compute-0 sudo[118269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:26:32 compute-0 sudo[118269]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:26:32 compute-0 sudo[118297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:26:32 compute-0 sudo[118297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:26:32 compute-0 sudo[118297]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:32 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v320: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:32 compute-0 sudo[118325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:26:32 compute-0 sudo[118325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:26:32 compute-0 sudo[118325]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:32 compute-0 sudo[118353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:26:32 compute-0 sudo[118353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:26:32 compute-0 ceph-mon[74927]: 9.1c scrub starts
Nov 24 18:26:32 compute-0 ceph-mon[74927]: 9.1c scrub ok
Nov 24 18:26:32 compute-0 ceph-mon[74927]: 9.13 scrub starts
Nov 24 18:26:32 compute-0 ceph-mon[74927]: 9.13 scrub ok
Nov 24 18:26:32 compute-0 podman[118433]: 2025-11-24 18:26:32.94835505 +0000 UTC m=+0.038719991 container create cb81d8f4060bd95b807f6006092e531f95a9770d8890266e187847a40cc5449d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_benz, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:26:32 compute-0 systemd[1]: Started libpod-conmon-cb81d8f4060bd95b807f6006092e531f95a9770d8890266e187847a40cc5449d.scope.
Nov 24 18:26:33 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:26:33 compute-0 podman[118433]: 2025-11-24 18:26:33.02049337 +0000 UTC m=+0.110858311 container init cb81d8f4060bd95b807f6006092e531f95a9770d8890266e187847a40cc5449d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_benz, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:26:33 compute-0 podman[118433]: 2025-11-24 18:26:32.928425091 +0000 UTC m=+0.018790062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:26:33 compute-0 podman[118433]: 2025-11-24 18:26:33.029205694 +0000 UTC m=+0.119570635 container start cb81d8f4060bd95b807f6006092e531f95a9770d8890266e187847a40cc5449d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:26:33 compute-0 podman[118433]: 2025-11-24 18:26:33.032386832 +0000 UTC m=+0.122751773 container attach cb81d8f4060bd95b807f6006092e531f95a9770d8890266e187847a40cc5449d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_benz, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 18:26:33 compute-0 heuristic_benz[118453]: 167 167
Nov 24 18:26:33 compute-0 systemd[1]: libpod-cb81d8f4060bd95b807f6006092e531f95a9770d8890266e187847a40cc5449d.scope: Deactivated successfully.
Nov 24 18:26:33 compute-0 conmon[118453]: conmon cb81d8f4060bd95b807f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cb81d8f4060bd95b807f6006092e531f95a9770d8890266e187847a40cc5449d.scope/container/memory.events
Nov 24 18:26:33 compute-0 podman[118433]: 2025-11-24 18:26:33.035799026 +0000 UTC m=+0.126163967 container died cb81d8f4060bd95b807f6006092e531f95a9770d8890266e187847a40cc5449d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 18:26:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-f611dedd19e3f730f5612dd837fb7187c1fa4b39b90fc05b36b4508ad046f1da-merged.mount: Deactivated successfully.
Nov 24 18:26:33 compute-0 podman[118433]: 2025-11-24 18:26:33.080738338 +0000 UTC m=+0.171103269 container remove cb81d8f4060bd95b807f6006092e531f95a9770d8890266e187847a40cc5449d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 24 18:26:33 compute-0 systemd[1]: libpod-conmon-cb81d8f4060bd95b807f6006092e531f95a9770d8890266e187847a40cc5449d.scope: Deactivated successfully.
Nov 24 18:26:33 compute-0 podman[118486]: 2025-11-24 18:26:33.236180412 +0000 UTC m=+0.040941775 container create 55d6f1751a43ae7a4e2f5fa27c9b10ae1487ce40ed7db967b5cadd2f3198cdaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wilbur, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:26:33 compute-0 systemd[1]: Started libpod-conmon-55d6f1751a43ae7a4e2f5fa27c9b10ae1487ce40ed7db967b5cadd2f3198cdaf.scope.
Nov 24 18:26:33 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:26:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed966ea4da2d7a6c7a9c4ab0602377c3b1de71a479fabfa20c2cca3e5d9cb9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:26:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed966ea4da2d7a6c7a9c4ab0602377c3b1de71a479fabfa20c2cca3e5d9cb9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:26:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed966ea4da2d7a6c7a9c4ab0602377c3b1de71a479fabfa20c2cca3e5d9cb9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:26:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed966ea4da2d7a6c7a9c4ab0602377c3b1de71a479fabfa20c2cca3e5d9cb9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:26:33 compute-0 podman[118486]: 2025-11-24 18:26:33.218822916 +0000 UTC m=+0.023584319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:26:33 compute-0 podman[118486]: 2025-11-24 18:26:33.320418589 +0000 UTC m=+0.125179972 container init 55d6f1751a43ae7a4e2f5fa27c9b10ae1487ce40ed7db967b5cadd2f3198cdaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:26:33 compute-0 podman[118486]: 2025-11-24 18:26:33.327895463 +0000 UTC m=+0.132656826 container start 55d6f1751a43ae7a4e2f5fa27c9b10ae1487ce40ed7db967b5cadd2f3198cdaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 18:26:33 compute-0 podman[118486]: 2025-11-24 18:26:33.330564088 +0000 UTC m=+0.135325471 container attach 55d6f1751a43ae7a4e2f5fa27c9b10ae1487ce40ed7db967b5cadd2f3198cdaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 18:26:33 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Nov 24 18:26:33 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Nov 24 18:26:33 compute-0 ceph-mon[74927]: 9.1e scrub starts
Nov 24 18:26:33 compute-0 ceph-mon[74927]: 9.1e scrub ok
Nov 24 18:26:33 compute-0 ceph-mon[74927]: pgmap v320: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:33 compute-0 ceph-mon[74927]: 10.6 scrub starts
Nov 24 18:26:33 compute-0 ceph-mon[74927]: 10.6 scrub ok
Nov 24 18:26:34 compute-0 modest_wilbur[118506]: {
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:     "0": [
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:         {
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "devices": [
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "/dev/loop3"
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             ],
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "lv_name": "ceph_lv0",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "lv_size": "21470642176",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "name": "ceph_lv0",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "tags": {
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.cluster_name": "ceph",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.crush_device_class": "",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.encrypted": "0",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.osd_id": "0",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.type": "block",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.vdo": "0"
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             },
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "type": "block",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "vg_name": "ceph_vg0"
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:         }
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:     ],
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:     "1": [
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:         {
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "devices": [
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "/dev/loop4"
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             ],
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "lv_name": "ceph_lv1",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "lv_size": "21470642176",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "name": "ceph_lv1",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "tags": {
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.cluster_name": "ceph",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.crush_device_class": "",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.encrypted": "0",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.osd_id": "1",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.type": "block",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.vdo": "0"
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             },
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "type": "block",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "vg_name": "ceph_vg1"
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:         }
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:     ],
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:     "2": [
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:         {
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "devices": [
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "/dev/loop5"
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             ],
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "lv_name": "ceph_lv2",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "lv_size": "21470642176",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "name": "ceph_lv2",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "tags": {
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.cluster_name": "ceph",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.crush_device_class": "",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.encrypted": "0",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.osd_id": "2",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.type": "block",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:                 "ceph.vdo": "0"
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             },
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "type": "block",
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:             "vg_name": "ceph_vg2"
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:         }
Nov 24 18:26:34 compute-0 modest_wilbur[118506]:     ]
Nov 24 18:26:34 compute-0 modest_wilbur[118506]: }
Nov 24 18:26:34 compute-0 systemd[1]: libpod-55d6f1751a43ae7a4e2f5fa27c9b10ae1487ce40ed7db967b5cadd2f3198cdaf.scope: Deactivated successfully.
Nov 24 18:26:34 compute-0 podman[118523]: 2025-11-24 18:26:34.093053968 +0000 UTC m=+0.028681435 container died 55d6f1751a43ae7a4e2f5fa27c9b10ae1487ce40ed7db967b5cadd2f3198cdaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wilbur, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:26:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ed966ea4da2d7a6c7a9c4ab0602377c3b1de71a479fabfa20c2cca3e5d9cb9b-merged.mount: Deactivated successfully.
Nov 24 18:26:34 compute-0 podman[118523]: 2025-11-24 18:26:34.154000404 +0000 UTC m=+0.089627841 container remove 55d6f1751a43ae7a4e2f5fa27c9b10ae1487ce40ed7db967b5cadd2f3198cdaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wilbur, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 18:26:34 compute-0 systemd[1]: libpod-conmon-55d6f1751a43ae7a4e2f5fa27c9b10ae1487ce40ed7db967b5cadd2f3198cdaf.scope: Deactivated successfully.
Nov 24 18:26:34 compute-0 sudo[118353]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:34 compute-0 sudo[118538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:26:34 compute-0 sudo[118538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:26:34 compute-0 sudo[118538]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:34 compute-0 sudo[118563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:26:34 compute-0 sudo[118563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:26:34 compute-0 sudo[118563]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:34 compute-0 sudo[118588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:26:34 compute-0 sudo[118588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:26:34 compute-0 sudo[118588]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:34 compute-0 sudo[118616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:26:34 compute-0 sudo[118616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:26:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:26:34
Nov 24 18:26:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:26:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:26:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'backups', 'vms', '.rgw.root', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'default.rgw.log']
Nov 24 18:26:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:26:34 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Nov 24 18:26:34 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v321: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:34 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Nov 24 18:26:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:26:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:26:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:26:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:26:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:26:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:26:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:26:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:26:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:26:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:26:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:26:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:26:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:26:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:26:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:26:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:26:34 compute-0 ceph-mon[74927]: 10.2 scrub starts
Nov 24 18:26:34 compute-0 ceph-mon[74927]: 10.2 scrub ok
Nov 24 18:26:34 compute-0 podman[118699]: 2025-11-24 18:26:34.807110409 +0000 UTC m=+0.045855606 container create 3160e29c86c4fdd87ecea913fb9f162aaa3e9d8643733f5f21711e661438bbdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:26:34 compute-0 systemd[1]: Started libpod-conmon-3160e29c86c4fdd87ecea913fb9f162aaa3e9d8643733f5f21711e661438bbdd.scope.
Nov 24 18:26:34 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:26:34 compute-0 podman[118699]: 2025-11-24 18:26:34.872592406 +0000 UTC m=+0.111337613 container init 3160e29c86c4fdd87ecea913fb9f162aaa3e9d8643733f5f21711e661438bbdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:26:34 compute-0 podman[118699]: 2025-11-24 18:26:34.78349019 +0000 UTC m=+0.022235427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:26:34 compute-0 podman[118699]: 2025-11-24 18:26:34.881050013 +0000 UTC m=+0.119795200 container start 3160e29c86c4fdd87ecea913fb9f162aaa3e9d8643733f5f21711e661438bbdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:26:34 compute-0 condescending_ishizaka[118720]: 167 167
Nov 24 18:26:34 compute-0 systemd[1]: libpod-3160e29c86c4fdd87ecea913fb9f162aaa3e9d8643733f5f21711e661438bbdd.scope: Deactivated successfully.
Nov 24 18:26:34 compute-0 podman[118699]: 2025-11-24 18:26:34.896752879 +0000 UTC m=+0.135498086 container attach 3160e29c86c4fdd87ecea913fb9f162aaa3e9d8643733f5f21711e661438bbdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ishizaka, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:26:34 compute-0 podman[118699]: 2025-11-24 18:26:34.897743313 +0000 UTC m=+0.136488520 container died 3160e29c86c4fdd87ecea913fb9f162aaa3e9d8643733f5f21711e661438bbdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:26:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a2ede3ee68537e0ffb109bb8623a322f3775189baf07d8bed53f43406422ea1-merged.mount: Deactivated successfully.
Nov 24 18:26:34 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Nov 24 18:26:34 compute-0 podman[118699]: 2025-11-24 18:26:34.93064547 +0000 UTC m=+0.169390657 container remove 3160e29c86c4fdd87ecea913fb9f162aaa3e9d8643733f5f21711e661438bbdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Nov 24 18:26:34 compute-0 systemd[1]: libpod-conmon-3160e29c86c4fdd87ecea913fb9f162aaa3e9d8643733f5f21711e661438bbdd.scope: Deactivated successfully.
Nov 24 18:26:34 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Nov 24 18:26:35 compute-0 podman[118753]: 2025-11-24 18:26:35.0809875 +0000 UTC m=+0.049677940 container create 5746f6c502a6640b28fd557d5709d03fc5c98185a3bb6bd9dc01e42e752ba20f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_napier, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 18:26:35 compute-0 systemd[1]: Started libpod-conmon-5746f6c502a6640b28fd557d5709d03fc5c98185a3bb6bd9dc01e42e752ba20f.scope.
Nov 24 18:26:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:26:35 compute-0 podman[118753]: 2025-11-24 18:26:35.053471874 +0000 UTC m=+0.022162324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:26:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06e0c55129496b2dc4652db99bb6d8f18e2494d395bfef6c2120933ccea58b8e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:26:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06e0c55129496b2dc4652db99bb6d8f18e2494d395bfef6c2120933ccea58b8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:26:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06e0c55129496b2dc4652db99bb6d8f18e2494d395bfef6c2120933ccea58b8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:26:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06e0c55129496b2dc4652db99bb6d8f18e2494d395bfef6c2120933ccea58b8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:26:35 compute-0 podman[118753]: 2025-11-24 18:26:35.172539966 +0000 UTC m=+0.141230406 container init 5746f6c502a6640b28fd557d5709d03fc5c98185a3bb6bd9dc01e42e752ba20f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_napier, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 18:26:35 compute-0 podman[118753]: 2025-11-24 18:26:35.179114957 +0000 UTC m=+0.147805387 container start 5746f6c502a6640b28fd557d5709d03fc5c98185a3bb6bd9dc01e42e752ba20f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:26:35 compute-0 podman[118753]: 2025-11-24 18:26:35.182339906 +0000 UTC m=+0.151030366 container attach 5746f6c502a6640b28fd557d5709d03fc5c98185a3bb6bd9dc01e42e752ba20f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:26:35 compute-0 ceph-mon[74927]: pgmap v321: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:35 compute-0 ceph-mon[74927]: 9.19 scrub starts
Nov 24 18:26:35 compute-0 ceph-mon[74927]: 9.19 scrub ok
Nov 24 18:26:36 compute-0 competent_napier[118776]: {
Nov 24 18:26:36 compute-0 competent_napier[118776]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:26:36 compute-0 competent_napier[118776]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:26:36 compute-0 competent_napier[118776]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:26:36 compute-0 competent_napier[118776]:         "osd_id": 0,
Nov 24 18:26:36 compute-0 competent_napier[118776]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:26:36 compute-0 competent_napier[118776]:         "type": "bluestore"
Nov 24 18:26:36 compute-0 competent_napier[118776]:     },
Nov 24 18:26:36 compute-0 competent_napier[118776]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:26:36 compute-0 competent_napier[118776]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:26:36 compute-0 competent_napier[118776]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:26:36 compute-0 competent_napier[118776]:         "osd_id": 1,
Nov 24 18:26:36 compute-0 competent_napier[118776]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:26:36 compute-0 competent_napier[118776]:         "type": "bluestore"
Nov 24 18:26:36 compute-0 competent_napier[118776]:     },
Nov 24 18:26:36 compute-0 competent_napier[118776]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:26:36 compute-0 competent_napier[118776]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:26:36 compute-0 competent_napier[118776]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:26:36 compute-0 competent_napier[118776]:         "osd_id": 2,
Nov 24 18:26:36 compute-0 competent_napier[118776]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:26:36 compute-0 competent_napier[118776]:         "type": "bluestore"
Nov 24 18:26:36 compute-0 competent_napier[118776]:     }
Nov 24 18:26:36 compute-0 competent_napier[118776]: }
Nov 24 18:26:36 compute-0 python3.9[118948]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:26:36 compute-0 systemd[1]: libpod-5746f6c502a6640b28fd557d5709d03fc5c98185a3bb6bd9dc01e42e752ba20f.scope: Deactivated successfully.
Nov 24 18:26:36 compute-0 podman[118753]: 2025-11-24 18:26:36.085850265 +0000 UTC m=+1.054540695 container died 5746f6c502a6640b28fd557d5709d03fc5c98185a3bb6bd9dc01e42e752ba20f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:26:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-06e0c55129496b2dc4652db99bb6d8f18e2494d395bfef6c2120933ccea58b8e-merged.mount: Deactivated successfully.
Nov 24 18:26:36 compute-0 podman[118753]: 2025-11-24 18:26:36.142075375 +0000 UTC m=+1.110765815 container remove 5746f6c502a6640b28fd557d5709d03fc5c98185a3bb6bd9dc01e42e752ba20f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 18:26:36 compute-0 systemd[1]: libpod-conmon-5746f6c502a6640b28fd557d5709d03fc5c98185a3bb6bd9dc01e42e752ba20f.scope: Deactivated successfully.
Nov 24 18:26:36 compute-0 sudo[118616]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:36 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:26:36 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:26:36 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:26:36 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:26:36 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 8c02dc4a-4c03-447e-8e35-d39b3918de36 does not exist
Nov 24 18:26:36 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev d304f9b0-362a-4392-952b-b4f747cd8db1 does not exist
Nov 24 18:26:36 compute-0 sudo[119002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:26:36 compute-0 sudo[119002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:26:36 compute-0 sudo[119002]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:36 compute-0 sudo[119031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:26:36 compute-0 sudo[119031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:26:36 compute-0 sudo[119031]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:36 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Nov 24 18:26:36 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Nov 24 18:26:36 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v322: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:36 compute-0 python3.9[119177]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:26:37 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:26:37 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:26:37 compute-0 ceph-mon[74927]: 10.14 scrub starts
Nov 24 18:26:37 compute-0 ceph-mon[74927]: 10.14 scrub ok
Nov 24 18:26:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:26:37 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Nov 24 18:26:37 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Nov 24 18:26:37 compute-0 python3.9[119331]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:26:38 compute-0 ceph-mon[74927]: pgmap v322: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:38 compute-0 ceph-mon[74927]: 10.11 scrub starts
Nov 24 18:26:38 compute-0 ceph-mon[74927]: 10.11 scrub ok
Nov 24 18:26:38 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v323: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:38 compute-0 sudo[119487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emwimoensvmjxtocsdvnqnryhqwwnctz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008798.3793616-125-31411584688764/AnsiballZ_setup.py'
Nov 24 18:26:38 compute-0 sudo[119487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:26:38 compute-0 python3.9[119489]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 18:26:39 compute-0 sudo[119487]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:39 compute-0 ceph-mon[74927]: pgmap v323: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:39 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.f scrub starts
Nov 24 18:26:39 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.f scrub ok
Nov 24 18:26:39 compute-0 sudo[119571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azrhaixyxudihubordyvcsqdpzgacqyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008798.3793616-125-31411584688764/AnsiballZ_dnf.py'
Nov 24 18:26:39 compute-0 sudo[119571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:26:39 compute-0 python3.9[119573]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:26:40 compute-0 ceph-mon[74927]: 10.f scrub starts
Nov 24 18:26:40 compute-0 ceph-mon[74927]: 10.f scrub ok
Nov 24 18:26:40 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Nov 24 18:26:40 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v324: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:40 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Nov 24 18:26:41 compute-0 ceph-mon[74927]: 9.15 scrub starts
Nov 24 18:26:41 compute-0 ceph-mon[74927]: pgmap v324: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:41 compute-0 ceph-mon[74927]: 9.15 scrub ok
Nov 24 18:26:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:26:42 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v325: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:42 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:26:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:26:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:26:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:26:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:26:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:26:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:26:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:26:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:26:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:26:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:26:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:26:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:26:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:26:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:26:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:26:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:26:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:26:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:26:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:26:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:26:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:26:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:26:43 compute-0 ceph-mon[74927]: pgmap v325: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:44 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v326: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:45 compute-0 ceph-mon[74927]: pgmap v326: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:46 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v327: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:26:47 compute-0 ceph-mon[74927]: pgmap v327: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:48 compute-0 sudo[119571]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:48 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v328: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:48 compute-0 sudo[119768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuutgmkdourtuwtwvqswkcfdsmgftpus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008808.5342739-137-94884498410555/AnsiballZ_command.py'
Nov 24 18:26:48 compute-0 sudo[119768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:26:49 compute-0 python3.9[119770]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:26:49 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.1f deep-scrub starts
Nov 24 18:26:49 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.1f deep-scrub ok
Nov 24 18:26:49 compute-0 ceph-mon[74927]: pgmap v328: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:49 compute-0 sudo[119768]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:50 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v329: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:50 compute-0 sudo[120055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gggibffkketgsaqhymddxpietihrnwif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008809.984186-145-46652195220033/AnsiballZ_selinux.py'
Nov 24 18:26:50 compute-0 sudo[120055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:26:50 compute-0 ceph-mon[74927]: 9.1f deep-scrub starts
Nov 24 18:26:50 compute-0 ceph-mon[74927]: 9.1f deep-scrub ok
Nov 24 18:26:50 compute-0 python3.9[120057]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 24 18:26:50 compute-0 sudo[120055]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:51 compute-0 ceph-mon[74927]: pgmap v329: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:51 compute-0 sudo[120207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqpkxnygeovmqhdyjgkzfzzxctgeicpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008811.4028842-156-119053791203734/AnsiballZ_command.py'
Nov 24 18:26:51 compute-0 sudo[120207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:26:51 compute-0 python3.9[120209]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 24 18:26:51 compute-0 sudo[120207]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:52 compute-0 sudo[120359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxinpykokarxlkppcaodqntkwtkxfiiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008812.1157417-164-44537942853887/AnsiballZ_file.py'
Nov 24 18:26:52 compute-0 sudo[120359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:26:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:26:52 compute-0 python3.9[120361]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:26:52 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v330: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:52 compute-0 sudo[120359]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:53 compute-0 sudo[120511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxondekffhbzmaypwyywwfekdmhjuayr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008812.7571034-172-127750876716492/AnsiballZ_mount.py'
Nov 24 18:26:53 compute-0 sudo[120511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:26:53 compute-0 python3.9[120513]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 24 18:26:53 compute-0 sudo[120511]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:53 compute-0 ceph-mon[74927]: pgmap v330: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:54 compute-0 sudo[120663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvmrejhanlvcculjnkzwicjpxawctbaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008814.1771164-200-129641912145182/AnsiballZ_file.py'
Nov 24 18:26:54 compute-0 sudo[120663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:26:54 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v331: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:54 compute-0 python3.9[120665]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:26:54 compute-0 sudo[120663]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:55 compute-0 sudo[120815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbxapyovixnqmlwwitnvogzejqziqmkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008814.8075988-208-96351938047349/AnsiballZ_stat.py'
Nov 24 18:26:55 compute-0 sudo[120815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:26:55 compute-0 python3.9[120817]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:26:55 compute-0 sudo[120815]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:55 compute-0 sudo[120893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-covetnppbhnwcdqorhhjjaoszuuzpkmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008814.8075988-208-96351938047349/AnsiballZ_file.py'
Nov 24 18:26:55 compute-0 sudo[120893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:26:55 compute-0 ceph-mon[74927]: pgmap v331: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:55 compute-0 python3.9[120895]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:26:55 compute-0 sudo[120893]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:56 compute-0 sudo[121045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzzgmxiklatcvferwmryilcqavkkmnqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008816.2137384-229-14027939651654/AnsiballZ_stat.py'
Nov 24 18:26:56 compute-0 sudo[121045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:26:56 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v332: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:56 compute-0 python3.9[121047]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:26:56 compute-0 sudo[121045]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:26:57 compute-0 sudo[121199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcemvjtcvmxeazifznjytbomdxkjkvuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008817.1752033-242-268862839275488/AnsiballZ_getent.py'
Nov 24 18:26:57 compute-0 sudo[121199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:26:57 compute-0 ceph-mon[74927]: pgmap v332: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:57 compute-0 python3.9[121201]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 24 18:26:57 compute-0 sudo[121199]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:58 compute-0 sudo[121352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcaszleqltvrigqoffnsstqosgqatuva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008818.0143394-252-251774647995943/AnsiballZ_getent.py'
Nov 24 18:26:58 compute-0 sudo[121352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:26:58 compute-0 python3.9[121354]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 24 18:26:58 compute-0 sudo[121352]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:58 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v333: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:59 compute-0 sudo[121505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzncynvsrzgixkowhqdhpsqapakzyadh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008818.6592429-260-39581837646239/AnsiballZ_group.py'
Nov 24 18:26:59 compute-0 sudo[121505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:26:59 compute-0 python3.9[121507]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 18:26:59 compute-0 sudo[121505]: pam_unix(sudo:session): session closed for user root
Nov 24 18:26:59 compute-0 ceph-mon[74927]: pgmap v333: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:26:59 compute-0 sudo[121657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xitsiedstnxumargmeutjeqlfrotpysk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008819.5081832-269-2996882203266/AnsiballZ_file.py'
Nov 24 18:26:59 compute-0 sudo[121657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:27:00 compute-0 python3.9[121659]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 24 18:27:00 compute-0 sudo[121657]: pam_unix(sudo:session): session closed for user root
Nov 24 18:27:00 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v334: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:00 compute-0 sudo[121809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybdckdanncabweidmisoalgkieegfpxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008820.3790958-280-174396701976679/AnsiballZ_dnf.py'
Nov 24 18:27:00 compute-0 sudo[121809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:27:00 compute-0 python3.9[121811]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:27:01 compute-0 ceph-mon[74927]: pgmap v334: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:27:02 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v335: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:03 compute-0 ceph-mon[74927]: pgmap v335: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:04 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v336: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:27:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:27:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:27:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:27:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:27:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:27:05 compute-0 ceph-mon[74927]: pgmap v336: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:06 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v337: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:27:07 compute-0 ceph-mon[74927]: pgmap v337: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:08 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v338: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:08 compute-0 sudo[121809]: pam_unix(sudo:session): session closed for user root
Nov 24 18:27:09 compute-0 sshd-session[116777]: Connection closed by 192.168.122.30 port 57818
Nov 24 18:27:09 compute-0 sshd-session[116774]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:27:09 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Nov 24 18:27:09 compute-0 systemd-logind[822]: Session 36 logged out. Waiting for processes to exit.
Nov 24 18:27:09 compute-0 systemd[1]: session-36.scope: Consumed 18.639s CPU time.
Nov 24 18:27:09 compute-0 systemd-logind[822]: Removed session 36.
Nov 24 18:27:09 compute-0 ceph-mon[74927]: pgmap v338: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:10 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v339: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:11 compute-0 ceph-mon[74927]: pgmap v339: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:27:12 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v340: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:13 compute-0 ceph-mon[74927]: pgmap v340: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:14 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v341: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:15 compute-0 ceph-mon[74927]: pgmap v341: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:16 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v342: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:27:17 compute-0 ceph-mon[74927]: pgmap v342: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:18 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v343: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:19 compute-0 ceph-mon[74927]: pgmap v343: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:20 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v344: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:21 compute-0 ceph-mon[74927]: pgmap v344: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:27:22 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v345: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:23 compute-0 ceph-mon[74927]: pgmap v345: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:24 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v346: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:25 compute-0 ceph-mon[74927]: pgmap v346: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:26 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v347: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:27:27 compute-0 ceph-mon[74927]: pgmap v347: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:28 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v348: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:29 compute-0 ceph-mon[74927]: pgmap v348: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:30 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v349: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:31 compute-0 ceph-mon[74927]: pgmap v349: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:27:32 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v350: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:33 compute-0 ceph-mon[74927]: pgmap v350: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:27:34
Nov 24 18:27:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:27:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:27:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'images', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'backups', 'volumes']
Nov 24 18:27:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:27:34 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v351: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:27:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:27:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:27:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:27:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:27:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:27:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:27:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:27:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:27:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:27:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:27:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:27:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:27:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:27:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:27:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:27:35 compute-0 ceph-mon[74927]: pgmap v351: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:36 compute-0 sudo[121882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:27:36 compute-0 sudo[121882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:27:36 compute-0 sudo[121882]: pam_unix(sudo:session): session closed for user root
Nov 24 18:27:36 compute-0 sudo[121907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:27:36 compute-0 sudo[121907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:27:36 compute-0 sudo[121907]: pam_unix(sudo:session): session closed for user root
Nov 24 18:27:36 compute-0 sudo[121932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:27:36 compute-0 sudo[121932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:27:36 compute-0 sudo[121932]: pam_unix(sudo:session): session closed for user root
Nov 24 18:27:36 compute-0 sudo[121957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:27:36 compute-0 sudo[121957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:27:36 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v352: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:36 compute-0 sudo[121957]: pam_unix(sudo:session): session closed for user root
Nov 24 18:27:36 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:27:36 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:27:36 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:27:36 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:27:36 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:27:36 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:27:36 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 41f3dbc4-e30e-4de5-91c0-953204ffc876 does not exist
Nov 24 18:27:36 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev b0b3eb31-d179-443a-9146-74c2c61a02af does not exist
Nov 24 18:27:36 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev fffc7eaf-46e8-4cc8-94e3-ec5ca5ea6a7a does not exist
Nov 24 18:27:36 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:27:36 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:27:36 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:27:36 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:27:36 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:27:36 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:27:37 compute-0 sudo[122012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:27:37 compute-0 sudo[122012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:27:37 compute-0 sudo[122012]: pam_unix(sudo:session): session closed for user root
Nov 24 18:27:37 compute-0 sudo[122037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:27:37 compute-0 sudo[122037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:27:37 compute-0 sudo[122037]: pam_unix(sudo:session): session closed for user root
Nov 24 18:27:37 compute-0 sudo[122062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:27:37 compute-0 sudo[122062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:27:37 compute-0 sudo[122062]: pam_unix(sudo:session): session closed for user root
Nov 24 18:27:37 compute-0 sudo[122087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:27:37 compute-0 sudo[122087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:27:37 compute-0 podman[122152]: 2025-11-24 18:27:37.495720437 +0000 UTC m=+0.036953129 container create 322b3c2e8053ce9bfec43da6c7c033572d384cb5164212706eb0a29621128a5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 18:27:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:27:37 compute-0 systemd[1]: Started libpod-conmon-322b3c2e8053ce9bfec43da6c7c033572d384cb5164212706eb0a29621128a5c.scope.
Nov 24 18:27:37 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:27:37 compute-0 podman[122152]: 2025-11-24 18:27:37.570636646 +0000 UTC m=+0.111869358 container init 322b3c2e8053ce9bfec43da6c7c033572d384cb5164212706eb0a29621128a5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goodall, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 18:27:37 compute-0 podman[122152]: 2025-11-24 18:27:37.478702418 +0000 UTC m=+0.019935140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:27:37 compute-0 podman[122152]: 2025-11-24 18:27:37.576928367 +0000 UTC m=+0.118161059 container start 322b3c2e8053ce9bfec43da6c7c033572d384cb5164212706eb0a29621128a5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goodall, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 18:27:37 compute-0 podman[122152]: 2025-11-24 18:27:37.579924069 +0000 UTC m=+0.121156761 container attach 322b3c2e8053ce9bfec43da6c7c033572d384cb5164212706eb0a29621128a5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 18:27:37 compute-0 compassionate_goodall[122168]: 167 167
Nov 24 18:27:37 compute-0 systemd[1]: libpod-322b3c2e8053ce9bfec43da6c7c033572d384cb5164212706eb0a29621128a5c.scope: Deactivated successfully.
Nov 24 18:27:37 compute-0 podman[122152]: 2025-11-24 18:27:37.582394398 +0000 UTC m=+0.123627110 container died 322b3c2e8053ce9bfec43da6c7c033572d384cb5164212706eb0a29621128a5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goodall, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 18:27:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-feaae527d37bb871598a7d5891c164da860c92253bc14309691f1d3e29c0032d-merged.mount: Deactivated successfully.
Nov 24 18:27:37 compute-0 podman[122152]: 2025-11-24 18:27:37.627060241 +0000 UTC m=+0.168292933 container remove 322b3c2e8053ce9bfec43da6c7c033572d384cb5164212706eb0a29621128a5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Nov 24 18:27:37 compute-0 systemd[1]: libpod-conmon-322b3c2e8053ce9bfec43da6c7c033572d384cb5164212706eb0a29621128a5c.scope: Deactivated successfully.
Nov 24 18:27:37 compute-0 ceph-mon[74927]: pgmap v352: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:37 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:27:37 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:27:37 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:27:37 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:27:37 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:27:37 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:27:37 compute-0 podman[122192]: 2025-11-24 18:27:37.775841714 +0000 UTC m=+0.034378797 container create 61378a05612e3ae26ced0b36323e65506d66e431f797a557c9cd3604c0d2a0f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bardeen, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:27:37 compute-0 systemd[1]: Started libpod-conmon-61378a05612e3ae26ced0b36323e65506d66e431f797a557c9cd3604c0d2a0f4.scope.
Nov 24 18:27:37 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:27:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9739673812d01ea11581e1947bfa6f138ceed6499d810b4f57927f47c3e0b6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:27:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9739673812d01ea11581e1947bfa6f138ceed6499d810b4f57927f47c3e0b6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:27:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9739673812d01ea11581e1947bfa6f138ceed6499d810b4f57927f47c3e0b6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:27:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9739673812d01ea11581e1947bfa6f138ceed6499d810b4f57927f47c3e0b6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:27:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9739673812d01ea11581e1947bfa6f138ceed6499d810b4f57927f47c3e0b6b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:27:37 compute-0 podman[122192]: 2025-11-24 18:27:37.843070578 +0000 UTC m=+0.101607691 container init 61378a05612e3ae26ced0b36323e65506d66e431f797a557c9cd3604c0d2a0f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bardeen, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 18:27:37 compute-0 podman[122192]: 2025-11-24 18:27:37.853891748 +0000 UTC m=+0.112428881 container start 61378a05612e3ae26ced0b36323e65506d66e431f797a557c9cd3604c0d2a0f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bardeen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 18:27:37 compute-0 podman[122192]: 2025-11-24 18:27:37.76108898 +0000 UTC m=+0.019626093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:27:37 compute-0 podman[122192]: 2025-11-24 18:27:37.858365645 +0000 UTC m=+0.116902748 container attach 61378a05612e3ae26ced0b36323e65506d66e431f797a557c9cd3604c0d2a0f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bardeen, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:27:38 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v353: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:38 compute-0 lucid_bardeen[122209]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:27:38 compute-0 lucid_bardeen[122209]: --> relative data size: 1.0
Nov 24 18:27:38 compute-0 lucid_bardeen[122209]: --> All data devices are unavailable
Nov 24 18:27:38 compute-0 systemd[1]: libpod-61378a05612e3ae26ced0b36323e65506d66e431f797a557c9cd3604c0d2a0f4.scope: Deactivated successfully.
Nov 24 18:27:38 compute-0 podman[122192]: 2025-11-24 18:27:38.86511708 +0000 UTC m=+1.123654163 container died 61378a05612e3ae26ced0b36323e65506d66e431f797a557c9cd3604c0d2a0f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 18:27:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9739673812d01ea11581e1947bfa6f138ceed6499d810b4f57927f47c3e0b6b-merged.mount: Deactivated successfully.
Nov 24 18:27:38 compute-0 podman[122192]: 2025-11-24 18:27:38.907311034 +0000 UTC m=+1.165848117 container remove 61378a05612e3ae26ced0b36323e65506d66e431f797a557c9cd3604c0d2a0f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:27:38 compute-0 systemd[1]: libpod-conmon-61378a05612e3ae26ced0b36323e65506d66e431f797a557c9cd3604c0d2a0f4.scope: Deactivated successfully.
Nov 24 18:27:38 compute-0 sudo[122087]: pam_unix(sudo:session): session closed for user root
Nov 24 18:27:38 compute-0 sudo[122249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:27:38 compute-0 sudo[122249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:27:38 compute-0 sudo[122249]: pam_unix(sudo:session): session closed for user root
Nov 24 18:27:39 compute-0 sudo[122274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:27:39 compute-0 sudo[122274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:27:39 compute-0 sudo[122274]: pam_unix(sudo:session): session closed for user root
Nov 24 18:27:39 compute-0 sudo[122299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:27:39 compute-0 sudo[122299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:27:39 compute-0 sudo[122299]: pam_unix(sudo:session): session closed for user root
Nov 24 18:27:39 compute-0 sudo[122324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:27:39 compute-0 sudo[122324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:27:39 compute-0 podman[122389]: 2025-11-24 18:27:39.462491326 +0000 UTC m=+0.048339502 container create 82c4d478962ebfe6f55a8956f59b685a3aa88c0b622061cea1d0008e3ecea5f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:27:39 compute-0 systemd[1]: Started libpod-conmon-82c4d478962ebfe6f55a8956f59b685a3aa88c0b622061cea1d0008e3ecea5f0.scope.
Nov 24 18:27:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:27:39 compute-0 podman[122389]: 2025-11-24 18:27:39.447363592 +0000 UTC m=+0.033211778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:27:39 compute-0 podman[122389]: 2025-11-24 18:27:39.552424205 +0000 UTC m=+0.138272391 container init 82c4d478962ebfe6f55a8956f59b685a3aa88c0b622061cea1d0008e3ecea5f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 18:27:39 compute-0 podman[122389]: 2025-11-24 18:27:39.563350748 +0000 UTC m=+0.149198914 container start 82c4d478962ebfe6f55a8956f59b685a3aa88c0b622061cea1d0008e3ecea5f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mendeleev, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:27:39 compute-0 hopeful_mendeleev[122405]: 167 167
Nov 24 18:27:39 compute-0 podman[122389]: 2025-11-24 18:27:39.566513964 +0000 UTC m=+0.152362150 container attach 82c4d478962ebfe6f55a8956f59b685a3aa88c0b622061cea1d0008e3ecea5f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mendeleev, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 18:27:39 compute-0 systemd[1]: libpod-82c4d478962ebfe6f55a8956f59b685a3aa88c0b622061cea1d0008e3ecea5f0.scope: Deactivated successfully.
Nov 24 18:27:39 compute-0 podman[122410]: 2025-11-24 18:27:39.609639229 +0000 UTC m=+0.026901057 container died 82c4d478962ebfe6f55a8956f59b685a3aa88c0b622061cea1d0008e3ecea5f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:27:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-78f3406e890c5f019b5513483913f43ddfff92320aedfbbc3dd1190559d00967-merged.mount: Deactivated successfully.
Nov 24 18:27:39 compute-0 podman[122410]: 2025-11-24 18:27:39.641593527 +0000 UTC m=+0.058855355 container remove 82c4d478962ebfe6f55a8956f59b685a3aa88c0b622061cea1d0008e3ecea5f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:27:39 compute-0 systemd[1]: libpod-conmon-82c4d478962ebfe6f55a8956f59b685a3aa88c0b622061cea1d0008e3ecea5f0.scope: Deactivated successfully.
Nov 24 18:27:39 compute-0 ceph-mon[74927]: pgmap v353: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:39 compute-0 podman[122432]: 2025-11-24 18:27:39.801963168 +0000 UTC m=+0.040466583 container create ab1c46ea3202c9a1e9ba66794b9caa37e4c9d9a956945791b506e5195f2a11ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:27:39 compute-0 systemd[1]: Started libpod-conmon-ab1c46ea3202c9a1e9ba66794b9caa37e4c9d9a956945791b506e5195f2a11ba.scope.
Nov 24 18:27:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf8066b3cfea7b8144b2b2a0e041ba4a0cd968ea9c9ffee4736fb62ae198b37f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf8066b3cfea7b8144b2b2a0e041ba4a0cd968ea9c9ffee4736fb62ae198b37f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf8066b3cfea7b8144b2b2a0e041ba4a0cd968ea9c9ffee4736fb62ae198b37f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf8066b3cfea7b8144b2b2a0e041ba4a0cd968ea9c9ffee4736fb62ae198b37f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:27:39 compute-0 podman[122432]: 2025-11-24 18:27:39.865725349 +0000 UTC m=+0.104228754 container init ab1c46ea3202c9a1e9ba66794b9caa37e4c9d9a956945791b506e5195f2a11ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 18:27:39 compute-0 podman[122432]: 2025-11-24 18:27:39.874728655 +0000 UTC m=+0.113232050 container start ab1c46ea3202c9a1e9ba66794b9caa37e4c9d9a956945791b506e5195f2a11ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 18:27:39 compute-0 podman[122432]: 2025-11-24 18:27:39.877260546 +0000 UTC m=+0.115763941 container attach ab1c46ea3202c9a1e9ba66794b9caa37e4c9d9a956945791b506e5195f2a11ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:27:39 compute-0 podman[122432]: 2025-11-24 18:27:39.783258709 +0000 UTC m=+0.021762154 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:27:40 compute-0 strange_lalande[122448]: {
Nov 24 18:27:40 compute-0 strange_lalande[122448]:     "0": [
Nov 24 18:27:40 compute-0 strange_lalande[122448]:         {
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "devices": [
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "/dev/loop3"
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             ],
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "lv_name": "ceph_lv0",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "lv_size": "21470642176",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "name": "ceph_lv0",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "tags": {
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.cluster_name": "ceph",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.crush_device_class": "",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.encrypted": "0",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.osd_id": "0",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.type": "block",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.vdo": "0"
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             },
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "type": "block",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "vg_name": "ceph_vg0"
Nov 24 18:27:40 compute-0 strange_lalande[122448]:         }
Nov 24 18:27:40 compute-0 strange_lalande[122448]:     ],
Nov 24 18:27:40 compute-0 strange_lalande[122448]:     "1": [
Nov 24 18:27:40 compute-0 strange_lalande[122448]:         {
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "devices": [
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "/dev/loop4"
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             ],
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "lv_name": "ceph_lv1",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "lv_size": "21470642176",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "name": "ceph_lv1",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "tags": {
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.cluster_name": "ceph",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.crush_device_class": "",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.encrypted": "0",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.osd_id": "1",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.type": "block",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.vdo": "0"
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             },
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "type": "block",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "vg_name": "ceph_vg1"
Nov 24 18:27:40 compute-0 strange_lalande[122448]:         }
Nov 24 18:27:40 compute-0 strange_lalande[122448]:     ],
Nov 24 18:27:40 compute-0 strange_lalande[122448]:     "2": [
Nov 24 18:27:40 compute-0 strange_lalande[122448]:         {
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "devices": [
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "/dev/loop5"
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             ],
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "lv_name": "ceph_lv2",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "lv_size": "21470642176",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "name": "ceph_lv2",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "tags": {
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.cluster_name": "ceph",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.crush_device_class": "",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.encrypted": "0",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.osd_id": "2",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.type": "block",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:                 "ceph.vdo": "0"
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             },
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "type": "block",
Nov 24 18:27:40 compute-0 strange_lalande[122448]:             "vg_name": "ceph_vg2"
Nov 24 18:27:40 compute-0 strange_lalande[122448]:         }
Nov 24 18:27:40 compute-0 strange_lalande[122448]:     ]
Nov 24 18:27:40 compute-0 strange_lalande[122448]: }
Nov 24 18:27:40 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v354: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:40 compute-0 systemd[1]: libpod-ab1c46ea3202c9a1e9ba66794b9caa37e4c9d9a956945791b506e5195f2a11ba.scope: Deactivated successfully.
Nov 24 18:27:40 compute-0 conmon[122448]: conmon ab1c46ea3202c9a1e9ba <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ab1c46ea3202c9a1e9ba66794b9caa37e4c9d9a956945791b506e5195f2a11ba.scope/container/memory.events
Nov 24 18:27:40 compute-0 podman[122457]: 2025-11-24 18:27:40.653755991 +0000 UTC m=+0.019620932 container died ab1c46ea3202c9a1e9ba66794b9caa37e4c9d9a956945791b506e5195f2a11ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_lalande, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 18:27:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf8066b3cfea7b8144b2b2a0e041ba4a0cd968ea9c9ffee4736fb62ae198b37f-merged.mount: Deactivated successfully.
Nov 24 18:27:40 compute-0 podman[122457]: 2025-11-24 18:27:40.703970577 +0000 UTC m=+0.069835518 container remove ab1c46ea3202c9a1e9ba66794b9caa37e4c9d9a956945791b506e5195f2a11ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_lalande, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:27:40 compute-0 systemd[1]: libpod-conmon-ab1c46ea3202c9a1e9ba66794b9caa37e4c9d9a956945791b506e5195f2a11ba.scope: Deactivated successfully.
Nov 24 18:27:40 compute-0 sudo[122324]: pam_unix(sudo:session): session closed for user root
Nov 24 18:27:40 compute-0 sudo[122473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:27:40 compute-0 sudo[122473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:27:40 compute-0 sudo[122473]: pam_unix(sudo:session): session closed for user root
Nov 24 18:27:40 compute-0 sudo[122498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:27:40 compute-0 sudo[122498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:27:40 compute-0 sudo[122498]: pam_unix(sudo:session): session closed for user root
Nov 24 18:27:40 compute-0 sudo[122523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:27:40 compute-0 sudo[122523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:27:40 compute-0 sudo[122523]: pam_unix(sudo:session): session closed for user root
Nov 24 18:27:40 compute-0 sudo[122548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:27:40 compute-0 sudo[122548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:27:41 compute-0 podman[122612]: 2025-11-24 18:27:41.270808979 +0000 UTC m=+0.037467171 container create 2246bab8880687ad8dd6ca39cb2cc303ffe3fbf0635cd99e75f70b129bef42da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 18:27:41 compute-0 systemd[1]: Started libpod-conmon-2246bab8880687ad8dd6ca39cb2cc303ffe3fbf0635cd99e75f70b129bef42da.scope.
Nov 24 18:27:41 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:27:41 compute-0 podman[122612]: 2025-11-24 18:27:41.254746393 +0000 UTC m=+0.021404635 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:27:41 compute-0 podman[122612]: 2025-11-24 18:27:41.35953632 +0000 UTC m=+0.126194532 container init 2246bab8880687ad8dd6ca39cb2cc303ffe3fbf0635cd99e75f70b129bef42da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_rubin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:27:41 compute-0 podman[122612]: 2025-11-24 18:27:41.370414441 +0000 UTC m=+0.137072663 container start 2246bab8880687ad8dd6ca39cb2cc303ffe3fbf0635cd99e75f70b129bef42da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_rubin, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:27:41 compute-0 admiring_rubin[122628]: 167 167
Nov 24 18:27:41 compute-0 systemd[1]: libpod-2246bab8880687ad8dd6ca39cb2cc303ffe3fbf0635cd99e75f70b129bef42da.scope: Deactivated successfully.
Nov 24 18:27:41 compute-0 podman[122612]: 2025-11-24 18:27:41.374341425 +0000 UTC m=+0.140999627 container attach 2246bab8880687ad8dd6ca39cb2cc303ffe3fbf0635cd99e75f70b129bef42da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_rubin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:27:41 compute-0 podman[122612]: 2025-11-24 18:27:41.375972624 +0000 UTC m=+0.142630826 container died 2246bab8880687ad8dd6ca39cb2cc303ffe3fbf0635cd99e75f70b129bef42da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_rubin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Nov 24 18:27:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7dcdf0c867c734e19396294560753826024801f34105026d9275ff80e462e40-merged.mount: Deactivated successfully.
Nov 24 18:27:41 compute-0 podman[122612]: 2025-11-24 18:27:41.41034252 +0000 UTC m=+0.177000722 container remove 2246bab8880687ad8dd6ca39cb2cc303ffe3fbf0635cd99e75f70b129bef42da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 18:27:41 compute-0 systemd[1]: libpod-conmon-2246bab8880687ad8dd6ca39cb2cc303ffe3fbf0635cd99e75f70b129bef42da.scope: Deactivated successfully.
Nov 24 18:27:41 compute-0 podman[122652]: 2025-11-24 18:27:41.556239903 +0000 UTC m=+0.035136084 container create 0dd2155a4ead81b2acd30a79be157dc921ca8d243786271b16e5de6d939fac9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kilby, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 18:27:41 compute-0 systemd[1]: Started libpod-conmon-0dd2155a4ead81b2acd30a79be157dc921ca8d243786271b16e5de6d939fac9d.scope.
Nov 24 18:27:41 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:27:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6db3010aeab685276e3a88d8f6afb3d42c807fe5627954ae0cbde02c58b5189/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:27:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6db3010aeab685276e3a88d8f6afb3d42c807fe5627954ae0cbde02c58b5189/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:27:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6db3010aeab685276e3a88d8f6afb3d42c807fe5627954ae0cbde02c58b5189/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:27:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6db3010aeab685276e3a88d8f6afb3d42c807fe5627954ae0cbde02c58b5189/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:27:41 compute-0 podman[122652]: 2025-11-24 18:27:41.540738071 +0000 UTC m=+0.019634282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:27:41 compute-0 podman[122652]: 2025-11-24 18:27:41.644552324 +0000 UTC m=+0.123448515 container init 0dd2155a4ead81b2acd30a79be157dc921ca8d243786271b16e5de6d939fac9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Nov 24 18:27:41 compute-0 podman[122652]: 2025-11-24 18:27:41.649847341 +0000 UTC m=+0.128743522 container start 0dd2155a4ead81b2acd30a79be157dc921ca8d243786271b16e5de6d939fac9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:27:41 compute-0 podman[122652]: 2025-11-24 18:27:41.656275185 +0000 UTC m=+0.135171386 container attach 0dd2155a4ead81b2acd30a79be157dc921ca8d243786271b16e5de6d939fac9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:27:41 compute-0 ceph-mon[74927]: pgmap v354: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:27:42 compute-0 elegant_kilby[122669]: {
Nov 24 18:27:42 compute-0 elegant_kilby[122669]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:27:42 compute-0 elegant_kilby[122669]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:27:42 compute-0 elegant_kilby[122669]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:27:42 compute-0 elegant_kilby[122669]:         "osd_id": 0,
Nov 24 18:27:42 compute-0 elegant_kilby[122669]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:27:42 compute-0 elegant_kilby[122669]:         "type": "bluestore"
Nov 24 18:27:42 compute-0 elegant_kilby[122669]:     },
Nov 24 18:27:42 compute-0 elegant_kilby[122669]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:27:42 compute-0 elegant_kilby[122669]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:27:42 compute-0 elegant_kilby[122669]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:27:42 compute-0 elegant_kilby[122669]:         "osd_id": 1,
Nov 24 18:27:42 compute-0 elegant_kilby[122669]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:27:42 compute-0 elegant_kilby[122669]:         "type": "bluestore"
Nov 24 18:27:42 compute-0 elegant_kilby[122669]:     },
Nov 24 18:27:42 compute-0 elegant_kilby[122669]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:27:42 compute-0 elegant_kilby[122669]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:27:42 compute-0 elegant_kilby[122669]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:27:42 compute-0 elegant_kilby[122669]:         "osd_id": 2,
Nov 24 18:27:42 compute-0 elegant_kilby[122669]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:27:42 compute-0 elegant_kilby[122669]:         "type": "bluestore"
Nov 24 18:27:42 compute-0 elegant_kilby[122669]:     }
Nov 24 18:27:42 compute-0 elegant_kilby[122669]: }
Nov 24 18:27:42 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v355: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:42 compute-0 systemd[1]: libpod-0dd2155a4ead81b2acd30a79be157dc921ca8d243786271b16e5de6d939fac9d.scope: Deactivated successfully.
Nov 24 18:27:42 compute-0 conmon[122669]: conmon 0dd2155a4ead81b2acd3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0dd2155a4ead81b2acd30a79be157dc921ca8d243786271b16e5de6d939fac9d.scope/container/memory.events
Nov 24 18:27:42 compute-0 podman[122652]: 2025-11-24 18:27:42.61841343 +0000 UTC m=+1.097309621 container died 0dd2155a4ead81b2acd30a79be157dc921ca8d243786271b16e5de6d939fac9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kilby, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:27:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6db3010aeab685276e3a88d8f6afb3d42c807fe5627954ae0cbde02c58b5189-merged.mount: Deactivated successfully.
Nov 24 18:27:42 compute-0 podman[122652]: 2025-11-24 18:27:42.679274381 +0000 UTC m=+1.158170572 container remove 0dd2155a4ead81b2acd30a79be157dc921ca8d243786271b16e5de6d939fac9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kilby, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:27:42 compute-0 systemd[1]: libpod-conmon-0dd2155a4ead81b2acd30a79be157dc921ca8d243786271b16e5de6d939fac9d.scope: Deactivated successfully.
Nov 24 18:27:42 compute-0 sudo[122548]: pam_unix(sudo:session): session closed for user root
Nov 24 18:27:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:27:42 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:27:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:27:42 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:27:42 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 541645d5-e97a-4098-b39d-f457a8aaf1c1 does not exist
Nov 24 18:27:42 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev bdf4c7a3-b353-47e6-a815-2ace31dbd4b4 does not exist
Nov 24 18:27:42 compute-0 sudo[122716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:27:42 compute-0 sudo[122716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:27:42 compute-0 sudo[122716]: pam_unix(sudo:session): session closed for user root
Nov 24 18:27:42 compute-0 sudo[122741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:27:42 compute-0 sudo[122741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:27:42 compute-0 sudo[122741]: pam_unix(sudo:session): session closed for user root
Nov 24 18:27:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:27:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:27:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:27:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:27:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:27:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:27:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:27:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:27:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:27:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:27:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:27:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:27:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:27:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:27:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:27:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:27:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:27:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:27:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:27:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:27:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:27:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:27:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:27:43 compute-0 ceph-mon[74927]: pgmap v355: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:43 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:27:43 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:27:44 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v356: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:45 compute-0 sshd-session[71202]: Received disconnect from 38.102.83.41 port 49908:11: disconnected by user
Nov 24 18:27:45 compute-0 sshd-session[71202]: Disconnected from user zuul 38.102.83.41 port 49908
Nov 24 18:27:45 compute-0 sshd-session[71199]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:27:45 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Nov 24 18:27:45 compute-0 systemd[1]: session-17.scope: Consumed 1min 25.451s CPU time.
Nov 24 18:27:45 compute-0 systemd-logind[822]: Session 17 logged out. Waiting for processes to exit.
Nov 24 18:27:45 compute-0 systemd-logind[822]: Removed session 17.
Nov 24 18:27:45 compute-0 ceph-mon[74927]: pgmap v356: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:46 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v357: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:27:47 compute-0 ceph-mon[74927]: pgmap v357: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:48 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v358: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:49 compute-0 ceph-mon[74927]: pgmap v358: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:50 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v359: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:51 compute-0 ceph-mon[74927]: pgmap v359: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:27:52 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v360: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:53 compute-0 ceph-mon[74927]: pgmap v360: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:54 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v361: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:55 compute-0 ceph-mon[74927]: pgmap v361: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:56 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v362: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:27:57 compute-0 ceph-mon[74927]: pgmap v362: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:58 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v363: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:27:59 compute-0 ceph-mon[74927]: pgmap v363: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:00 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v364: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:01 compute-0 ceph-mon[74927]: pgmap v364: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:02 compute-0 sshd-session[122767]: Accepted publickey for zuul from 192.168.122.30 port 33920 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:28:02 compute-0 systemd-logind[822]: New session 37 of user zuul.
Nov 24 18:28:02 compute-0 systemd[1]: Started Session 37 of User zuul.
Nov 24 18:28:02 compute-0 sshd-session[122767]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:28:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:28:02 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v365: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:02 compute-0 python3.9[122920]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 24 18:28:03 compute-0 ceph-mon[74927]: pgmap v365: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:04 compute-0 python3.9[123094]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:28:04 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v366: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:28:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:28:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:28:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:28:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:28:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:28:04 compute-0 sudo[123248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udniwjdffbzehxrolmplneefeyebwzmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008884.5321028-45-180810794246637/AnsiballZ_command.py'
Nov 24 18:28:04 compute-0 sudo[123248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:28:05 compute-0 python3.9[123250]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:28:05 compute-0 sudo[123248]: pam_unix(sudo:session): session closed for user root
Nov 24 18:28:05 compute-0 ceph-mon[74927]: pgmap v366: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:05 compute-0 sudo[123401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tehcqbwcodrjnynuddfaebuxyeerckkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008885.4441438-57-281109304396283/AnsiballZ_stat.py'
Nov 24 18:28:05 compute-0 sudo[123401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:28:06 compute-0 python3.9[123403]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:28:06 compute-0 sudo[123401]: pam_unix(sudo:session): session closed for user root
Nov 24 18:28:06 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v367: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:06 compute-0 sudo[123555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xveptgltwrzjubiielrxyxzwycrjwbmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008886.3838909-68-229524292266418/AnsiballZ_file.py'
Nov 24 18:28:06 compute-0 sudo[123555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:28:07 compute-0 python3.9[123557]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:28:07 compute-0 sudo[123555]: pam_unix(sudo:session): session closed for user root
Nov 24 18:28:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:28:07 compute-0 sudo[123707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmzbyunbgphsbsavigyiuxgedpsmdfyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008887.2953155-77-90237988142233/AnsiballZ_file.py'
Nov 24 18:28:07 compute-0 sudo[123707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:28:07 compute-0 python3.9[123709]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:28:07 compute-0 ceph-mon[74927]: pgmap v367: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:07 compute-0 sudo[123707]: pam_unix(sudo:session): session closed for user root
Nov 24 18:28:08 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v368: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:08 compute-0 python3.9[123859]: ansible-ansible.builtin.service_facts Invoked
Nov 24 18:28:08 compute-0 network[123876]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 18:28:08 compute-0 network[123877]: 'network-scripts' will be removed from distribution in near future.
Nov 24 18:28:08 compute-0 network[123878]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 18:28:09 compute-0 ceph-mon[74927]: pgmap v368: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:10 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v369: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:11 compute-0 ceph-mon[74927]: pgmap v369: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:12 compute-0 python3.9[124138]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:28:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:28:12 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v370: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:13 compute-0 python3.9[124288]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:28:13 compute-0 ceph-mon[74927]: pgmap v370: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:14 compute-0 python3.9[124442]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:28:14 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v371: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:15 compute-0 sudo[124598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovoubilkypbkvbsqsvndduwuufraopfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008894.6694617-125-19925571900287/AnsiballZ_setup.py'
Nov 24 18:28:15 compute-0 sudo[124598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:28:15 compute-0 python3.9[124600]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 18:28:15 compute-0 sudo[124598]: pam_unix(sudo:session): session closed for user root
Nov 24 18:28:15 compute-0 ceph-mon[74927]: pgmap v371: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:15 compute-0 sudo[124682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aidrmbgalkioeemxwddcnggubbhglgnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008894.6694617-125-19925571900287/AnsiballZ_dnf.py'
Nov 24 18:28:15 compute-0 sudo[124682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:28:16 compute-0 python3.9[124684]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:28:16 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v372: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:28:17 compute-0 ceph-mon[74927]: pgmap v372: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:18 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v373: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:19 compute-0 ceph-mon[74927]: pgmap v373: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:20 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v374: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:21 compute-0 ceph-mon[74927]: pgmap v374: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:28:22 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v375: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:23 compute-0 ceph-mon[74927]: pgmap v375: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:24 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v376: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:25 compute-0 ceph-mon[74927]: pgmap v376: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:26 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v377: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.602345) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008907602381, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1676, "num_deletes": 252, "total_data_size": 2419442, "memory_usage": 2450976, "flush_reason": "Manual Compaction"}
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008907729554, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1411355, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7347, "largest_seqno": 9022, "table_properties": {"data_size": 1405773, "index_size": 2530, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16096, "raw_average_key_size": 20, "raw_value_size": 1392659, "raw_average_value_size": 1799, "num_data_blocks": 119, "num_entries": 774, "num_filter_entries": 774, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008749, "oldest_key_time": 1764008749, "file_creation_time": 1764008907, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 127268 microseconds, and 5657 cpu microseconds.
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.729609) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1411355 bytes OK
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.729630) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.731963) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.731982) EVENT_LOG_v1 {"time_micros": 1764008907731974, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.732003) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2411970, prev total WAL file size 2411970, number of live WAL files 2.
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.733055) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323533' seq:0, type:0; will stop at (end)
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1378KB)], [20(6986KB)]
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008907733147, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8565946, "oldest_snapshot_seqno": -1}
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3399 keys, 6787027 bytes, temperature: kUnknown
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008907781056, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6787027, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6761225, "index_size": 16221, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8517, "raw_key_size": 81388, "raw_average_key_size": 23, "raw_value_size": 6696679, "raw_average_value_size": 1970, "num_data_blocks": 719, "num_entries": 3399, "num_filter_entries": 3399, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008324, "oldest_key_time": 0, "file_creation_time": 1764008907, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.781313) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6787027 bytes
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.782602) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.6 rd, 141.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 6.8 +0.0 blob) out(6.5 +0.0 blob), read-write-amplify(10.9) write-amplify(4.8) OK, records in: 3841, records dropped: 442 output_compression: NoCompression
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.782619) EVENT_LOG_v1 {"time_micros": 1764008907782611, "job": 6, "event": "compaction_finished", "compaction_time_micros": 47966, "compaction_time_cpu_micros": 15177, "output_level": 6, "num_output_files": 1, "total_output_size": 6787027, "num_input_records": 3841, "num_output_records": 3399, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008907783022, "job": 6, "event": "table_file_deletion", "file_number": 22}
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008907784300, "job": 6, "event": "table_file_deletion", "file_number": 20}
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.732889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.784345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.784349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.784350) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.784351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:28:27 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.784353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:28:27 compute-0 ceph-mon[74927]: pgmap v377: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:28 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v378: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:29 compute-0 ceph-mon[74927]: pgmap v378: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:30 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v379: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:31 compute-0 ceph-mon[74927]: pgmap v379: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:28:32 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v380: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:34 compute-0 ceph-mon[74927]: pgmap v380: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:28:34
Nov 24 18:28:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:28:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:28:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'images', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'backups']
Nov 24 18:28:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:28:34 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v381: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:28:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:28:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:28:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:28:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:28:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:28:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:28:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:28:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:28:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:28:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:28:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:28:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:28:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:28:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:28:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:28:36 compute-0 ceph-mon[74927]: pgmap v381: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:36 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v382: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:28:38 compute-0 ceph-mon[74927]: pgmap v382: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:38 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v383: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:40 compute-0 ceph-mon[74927]: pgmap v383: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:40 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v384: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:42 compute-0 ceph-mon[74927]: pgmap v384: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:28:42 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v385: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:42 compute-0 sudo[124834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:28:42 compute-0 sudo[124834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:28:42 compute-0 sudo[124834]: pam_unix(sudo:session): session closed for user root
Nov 24 18:28:42 compute-0 sudo[124859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:28:42 compute-0 sudo[124859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:28:42 compute-0 sudo[124859]: pam_unix(sudo:session): session closed for user root
Nov 24 18:28:43 compute-0 sudo[124884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:28:43 compute-0 sudo[124884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:28:43 compute-0 sudo[124884]: pam_unix(sudo:session): session closed for user root
Nov 24 18:28:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:28:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:28:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:28:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:28:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:28:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:28:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:28:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:28:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:28:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:28:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:28:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:28:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:28:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:28:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:28:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:28:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:28:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:28:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:28:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:28:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:28:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:28:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:28:43 compute-0 sudo[124910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:28:43 compute-0 sudo[124910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:28:43 compute-0 sudo[124910]: pam_unix(sudo:session): session closed for user root
Nov 24 18:28:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:28:43 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:28:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:28:43 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:28:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:28:43 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:28:43 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev b8a543d7-4190-494c-a197-0c6f1289c639 does not exist
Nov 24 18:28:43 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 23ea1bba-f6d9-4fff-9bfb-5b1af5764a1f does not exist
Nov 24 18:28:43 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev d7430aad-7aa0-4e7d-8c19-35685ff806d3 does not exist
Nov 24 18:28:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:28:43 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:28:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:28:43 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:28:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:28:43 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:28:43 compute-0 sudo[124972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:28:43 compute-0 sudo[124972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:28:43 compute-0 sudo[124972]: pam_unix(sudo:session): session closed for user root
Nov 24 18:28:43 compute-0 sudo[124997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:28:43 compute-0 sudo[124997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:28:43 compute-0 sudo[124997]: pam_unix(sudo:session): session closed for user root
Nov 24 18:28:43 compute-0 sudo[125022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:28:43 compute-0 sudo[125022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:28:43 compute-0 sudo[125022]: pam_unix(sudo:session): session closed for user root
Nov 24 18:28:43 compute-0 sudo[125047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:28:43 compute-0 sudo[125047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:28:43 compute-0 podman[125111]: 2025-11-24 18:28:43.959195069 +0000 UTC m=+0.050459241 container create 638e8faf7f42b84ae969ac04a7d419cd3acada495bdf86017306b26a33ffa41b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mendel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:28:43 compute-0 systemd[1]: Started libpod-conmon-638e8faf7f42b84ae969ac04a7d419cd3acada495bdf86017306b26a33ffa41b.scope.
Nov 24 18:28:44 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:28:44 compute-0 podman[125111]: 2025-11-24 18:28:43.929391515 +0000 UTC m=+0.020655727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:28:44 compute-0 podman[125111]: 2025-11-24 18:28:44.088608029 +0000 UTC m=+0.179872201 container init 638e8faf7f42b84ae969ac04a7d419cd3acada495bdf86017306b26a33ffa41b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mendel, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 18:28:44 compute-0 podman[125111]: 2025-11-24 18:28:44.095482701 +0000 UTC m=+0.186746853 container start 638e8faf7f42b84ae969ac04a7d419cd3acada495bdf86017306b26a33ffa41b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mendel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 18:28:44 compute-0 elegant_mendel[125128]: 167 167
Nov 24 18:28:44 compute-0 systemd[1]: libpod-638e8faf7f42b84ae969ac04a7d419cd3acada495bdf86017306b26a33ffa41b.scope: Deactivated successfully.
Nov 24 18:28:44 compute-0 ceph-mon[74927]: pgmap v385: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:44 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:28:44 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:28:44 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:28:44 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:28:44 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:28:44 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:28:44 compute-0 podman[125111]: 2025-11-24 18:28:44.127706926 +0000 UTC m=+0.218971098 container attach 638e8faf7f42b84ae969ac04a7d419cd3acada495bdf86017306b26a33ffa41b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mendel, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:28:44 compute-0 podman[125111]: 2025-11-24 18:28:44.128036704 +0000 UTC m=+0.219300856 container died 638e8faf7f42b84ae969ac04a7d419cd3acada495bdf86017306b26a33ffa41b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:28:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0d4ee0497fdbf92cf49d689120abaa9c60703fd0b7ecde065a3c8f577fa9bf3-merged.mount: Deactivated successfully.
Nov 24 18:28:44 compute-0 podman[125111]: 2025-11-24 18:28:44.165808387 +0000 UTC m=+0.257072559 container remove 638e8faf7f42b84ae969ac04a7d419cd3acada495bdf86017306b26a33ffa41b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mendel, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:28:44 compute-0 systemd[1]: libpod-conmon-638e8faf7f42b84ae969ac04a7d419cd3acada495bdf86017306b26a33ffa41b.scope: Deactivated successfully.
Nov 24 18:28:44 compute-0 podman[125153]: 2025-11-24 18:28:44.317372001 +0000 UTC m=+0.040429851 container create bda48b0575bb25f07a94fc0ce70ab5411973e9a18afe63ff30c9c9d933a9c49a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_darwin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 18:28:44 compute-0 systemd[1]: Started libpod-conmon-bda48b0575bb25f07a94fc0ce70ab5411973e9a18afe63ff30c9c9d933a9c49a.scope.
Nov 24 18:28:44 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:28:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952b05c3f5c4c441a5edee1e8c327083500af48fb5b6b95349a7a0ad4db122a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:28:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952b05c3f5c4c441a5edee1e8c327083500af48fb5b6b95349a7a0ad4db122a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:28:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952b05c3f5c4c441a5edee1e8c327083500af48fb5b6b95349a7a0ad4db122a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:28:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952b05c3f5c4c441a5edee1e8c327083500af48fb5b6b95349a7a0ad4db122a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:28:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952b05c3f5c4c441a5edee1e8c327083500af48fb5b6b95349a7a0ad4db122a3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:28:44 compute-0 podman[125153]: 2025-11-24 18:28:44.299147296 +0000 UTC m=+0.022205166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:28:44 compute-0 podman[125153]: 2025-11-24 18:28:44.400876665 +0000 UTC m=+0.123934525 container init bda48b0575bb25f07a94fc0ce70ab5411973e9a18afe63ff30c9c9d933a9c49a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 18:28:44 compute-0 podman[125153]: 2025-11-24 18:28:44.408415793 +0000 UTC m=+0.131473623 container start bda48b0575bb25f07a94fc0ce70ab5411973e9a18afe63ff30c9c9d933a9c49a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_darwin, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 18:28:44 compute-0 podman[125153]: 2025-11-24 18:28:44.411535291 +0000 UTC m=+0.134593141 container attach bda48b0575bb25f07a94fc0ce70ab5411973e9a18afe63ff30c9c9d933a9c49a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:28:44 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v386: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:45 compute-0 mystifying_darwin[125170]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:28:45 compute-0 mystifying_darwin[125170]: --> relative data size: 1.0
Nov 24 18:28:45 compute-0 mystifying_darwin[125170]: --> All data devices are unavailable
Nov 24 18:28:45 compute-0 systemd[1]: libpod-bda48b0575bb25f07a94fc0ce70ab5411973e9a18afe63ff30c9c9d933a9c49a.scope: Deactivated successfully.
Nov 24 18:28:45 compute-0 podman[125153]: 2025-11-24 18:28:45.33921652 +0000 UTC m=+1.062274360 container died bda48b0575bb25f07a94fc0ce70ab5411973e9a18afe63ff30c9c9d933a9c49a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Nov 24 18:28:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-952b05c3f5c4c441a5edee1e8c327083500af48fb5b6b95349a7a0ad4db122a3-merged.mount: Deactivated successfully.
Nov 24 18:28:45 compute-0 podman[125153]: 2025-11-24 18:28:45.386780498 +0000 UTC m=+1.109838328 container remove bda48b0575bb25f07a94fc0ce70ab5411973e9a18afe63ff30c9c9d933a9c49a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_darwin, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 18:28:45 compute-0 systemd[1]: libpod-conmon-bda48b0575bb25f07a94fc0ce70ab5411973e9a18afe63ff30c9c9d933a9c49a.scope: Deactivated successfully.
Nov 24 18:28:45 compute-0 sudo[125047]: pam_unix(sudo:session): session closed for user root
Nov 24 18:28:45 compute-0 sudo[125216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:28:45 compute-0 sudo[125216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:28:45 compute-0 sudo[125216]: pam_unix(sudo:session): session closed for user root
Nov 24 18:28:45 compute-0 sudo[125241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:28:45 compute-0 sudo[125241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:28:45 compute-0 sudo[125241]: pam_unix(sudo:session): session closed for user root
Nov 24 18:28:45 compute-0 sudo[125266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:28:45 compute-0 sudo[125266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:28:45 compute-0 sudo[125266]: pam_unix(sudo:session): session closed for user root
Nov 24 18:28:45 compute-0 sudo[125291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:28:45 compute-0 sudo[125291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:28:45 compute-0 podman[125355]: 2025-11-24 18:28:45.924306577 +0000 UTC m=+0.035008605 container create e99a855e84265a574f5490171479dfa507144a8ccbbf6c6b02c37ca5893b5b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_noyce, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:28:45 compute-0 systemd[1]: Started libpod-conmon-e99a855e84265a574f5490171479dfa507144a8ccbbf6c6b02c37ca5893b5b18.scope.
Nov 24 18:28:45 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:28:46 compute-0 podman[125355]: 2025-11-24 18:28:45.909303393 +0000 UTC m=+0.020005431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:28:46 compute-0 podman[125355]: 2025-11-24 18:28:46.007139015 +0000 UTC m=+0.117841043 container init e99a855e84265a574f5490171479dfa507144a8ccbbf6c6b02c37ca5893b5b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:28:46 compute-0 podman[125355]: 2025-11-24 18:28:46.013445043 +0000 UTC m=+0.124147061 container start e99a855e84265a574f5490171479dfa507144a8ccbbf6c6b02c37ca5893b5b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_noyce, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:28:46 compute-0 podman[125355]: 2025-11-24 18:28:46.016697744 +0000 UTC m=+0.127399792 container attach e99a855e84265a574f5490171479dfa507144a8ccbbf6c6b02c37ca5893b5b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_noyce, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 18:28:46 compute-0 interesting_noyce[125372]: 167 167
Nov 24 18:28:46 compute-0 systemd[1]: libpod-e99a855e84265a574f5490171479dfa507144a8ccbbf6c6b02c37ca5893b5b18.scope: Deactivated successfully.
Nov 24 18:28:46 compute-0 podman[125355]: 2025-11-24 18:28:46.018505199 +0000 UTC m=+0.129207217 container died e99a855e84265a574f5490171479dfa507144a8ccbbf6c6b02c37ca5893b5b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:28:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-e74ba456fcde87c73223e67ca09af9984afa47e0b201f16aead54ec57679bc7e-merged.mount: Deactivated successfully.
Nov 24 18:28:46 compute-0 podman[125355]: 2025-11-24 18:28:46.060173649 +0000 UTC m=+0.170875667 container remove e99a855e84265a574f5490171479dfa507144a8ccbbf6c6b02c37ca5893b5b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 18:28:46 compute-0 systemd[1]: libpod-conmon-e99a855e84265a574f5490171479dfa507144a8ccbbf6c6b02c37ca5893b5b18.scope: Deactivated successfully.
Nov 24 18:28:46 compute-0 ceph-mon[74927]: pgmap v386: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:46 compute-0 podman[125397]: 2025-11-24 18:28:46.219005224 +0000 UTC m=+0.035438725 container create 8cd1b308f4de35451a2cd6dabaf753daf2fc9819d0e05df3c315bd3ef7c98c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hodgkin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 24 18:28:46 compute-0 systemd[1]: Started libpod-conmon-8cd1b308f4de35451a2cd6dabaf753daf2fc9819d0e05df3c315bd3ef7c98c6c.scope.
Nov 24 18:28:46 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:28:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c785c51b2dfba7dd72a8474db7422968d4785450789d1338ad203595153e72b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:28:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c785c51b2dfba7dd72a8474db7422968d4785450789d1338ad203595153e72b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:28:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c785c51b2dfba7dd72a8474db7422968d4785450789d1338ad203595153e72b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:28:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c785c51b2dfba7dd72a8474db7422968d4785450789d1338ad203595153e72b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:28:46 compute-0 podman[125397]: 2025-11-24 18:28:46.29695093 +0000 UTC m=+0.113384451 container init 8cd1b308f4de35451a2cd6dabaf753daf2fc9819d0e05df3c315bd3ef7c98c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:28:46 compute-0 podman[125397]: 2025-11-24 18:28:46.206168694 +0000 UTC m=+0.022602215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:28:46 compute-0 podman[125397]: 2025-11-24 18:28:46.304681343 +0000 UTC m=+0.121114844 container start 8cd1b308f4de35451a2cd6dabaf753daf2fc9819d0e05df3c315bd3ef7c98c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hodgkin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Nov 24 18:28:46 compute-0 podman[125397]: 2025-11-24 18:28:46.307718829 +0000 UTC m=+0.124152330 container attach 8cd1b308f4de35451a2cd6dabaf753daf2fc9819d0e05df3c315bd3ef7c98c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hodgkin, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 18:28:46 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v387: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]: {
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:     "0": [
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:         {
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "devices": [
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "/dev/loop3"
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             ],
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "lv_name": "ceph_lv0",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "lv_size": "21470642176",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "name": "ceph_lv0",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "tags": {
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.cluster_name": "ceph",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.crush_device_class": "",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.encrypted": "0",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.osd_id": "0",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.type": "block",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.vdo": "0"
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             },
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "type": "block",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "vg_name": "ceph_vg0"
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:         }
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:     ],
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:     "1": [
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:         {
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "devices": [
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "/dev/loop4"
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             ],
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "lv_name": "ceph_lv1",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "lv_size": "21470642176",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "name": "ceph_lv1",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "tags": {
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.cluster_name": "ceph",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.crush_device_class": "",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.encrypted": "0",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.osd_id": "1",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.type": "block",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.vdo": "0"
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             },
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "type": "block",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "vg_name": "ceph_vg1"
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:         }
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:     ],
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:     "2": [
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:         {
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "devices": [
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "/dev/loop5"
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             ],
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "lv_name": "ceph_lv2",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "lv_size": "21470642176",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "name": "ceph_lv2",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "tags": {
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.cluster_name": "ceph",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.crush_device_class": "",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.encrypted": "0",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.osd_id": "2",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.type": "block",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:                 "ceph.vdo": "0"
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             },
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "type": "block",
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:             "vg_name": "ceph_vg2"
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:         }
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]:     ]
Nov 24 18:28:47 compute-0 friendly_hodgkin[125414]: }
Nov 24 18:28:47 compute-0 systemd[1]: libpod-8cd1b308f4de35451a2cd6dabaf753daf2fc9819d0e05df3c315bd3ef7c98c6c.scope: Deactivated successfully.
Nov 24 18:28:47 compute-0 podman[125397]: 2025-11-24 18:28:47.083478366 +0000 UTC m=+0.899911867 container died 8cd1b308f4de35451a2cd6dabaf753daf2fc9819d0e05df3c315bd3ef7c98c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hodgkin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:28:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c785c51b2dfba7dd72a8474db7422968d4785450789d1338ad203595153e72b-merged.mount: Deactivated successfully.
Nov 24 18:28:47 compute-0 podman[125397]: 2025-11-24 18:28:47.159381401 +0000 UTC m=+0.975814932 container remove 8cd1b308f4de35451a2cd6dabaf753daf2fc9819d0e05df3c315bd3ef7c98c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 18:28:47 compute-0 systemd[1]: libpod-conmon-8cd1b308f4de35451a2cd6dabaf753daf2fc9819d0e05df3c315bd3ef7c98c6c.scope: Deactivated successfully.
Nov 24 18:28:47 compute-0 sudo[125291]: pam_unix(sudo:session): session closed for user root
Nov 24 18:28:47 compute-0 sudo[125436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:28:47 compute-0 sudo[125436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:28:47 compute-0 sudo[125436]: pam_unix(sudo:session): session closed for user root
Nov 24 18:28:47 compute-0 sudo[125461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:28:47 compute-0 sudo[125461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:28:47 compute-0 sudo[125461]: pam_unix(sudo:session): session closed for user root
Nov 24 18:28:47 compute-0 sudo[125486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:28:47 compute-0 sudo[125486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:28:47 compute-0 sudo[125486]: pam_unix(sudo:session): session closed for user root
Nov 24 18:28:47 compute-0 sudo[125511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:28:47 compute-0 sudo[125511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:28:47 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:28:47 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2031 writes, 9048 keys, 2031 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 2031 writes, 2031 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2031 writes, 9048 keys, 2031 commit groups, 1.0 writes per commit group, ingest: 11.00 MB, 0.02 MB/s
                                           Interval WAL: 2031 writes, 2031 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     50.7      0.16              0.02         3    0.054       0      0       0.0       0.0
                                             L6      1/0    6.47 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    177.1    156.6      0.08              0.04         2    0.042    7197    731       0.0       0.0
                                            Sum      1/0    6.47 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     60.9     87.1      0.25              0.06         5    0.049    7197    731       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     61.4     87.6      0.25              0.06         4    0.061    7197    731       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    177.1    156.6      0.08              0.04         2    0.042    7197    731       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     51.0      0.16              0.02         2    0.080       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     28.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.008, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.03 MB/s read, 0.2 seconds
                                           Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.03 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562af0cfd1f0#2 capacity: 308.00 MB usage: 590.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(36,502.97 KB,0.159474%) FilterBlock(6,28.55 KB,0.00905124%) IndexBlock(6,59.16 KB,0.0187564%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 24 18:28:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:28:47 compute-0 podman[125576]: 2025-11-24 18:28:47.701463784 +0000 UTC m=+0.046641205 container create 655765b3cbafd88ea77b5bcd85327dd53d1886c908ee1d8d02ccf29c6124e107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hermann, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 18:28:47 compute-0 systemd[1]: Started libpod-conmon-655765b3cbafd88ea77b5bcd85327dd53d1886c908ee1d8d02ccf29c6124e107.scope.
Nov 24 18:28:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:28:47 compute-0 podman[125576]: 2025-11-24 18:28:47.681534807 +0000 UTC m=+0.026712228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:28:47 compute-0 podman[125576]: 2025-11-24 18:28:47.862117055 +0000 UTC m=+0.207294496 container init 655765b3cbafd88ea77b5bcd85327dd53d1886c908ee1d8d02ccf29c6124e107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hermann, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 18:28:47 compute-0 podman[125576]: 2025-11-24 18:28:47.868510335 +0000 UTC m=+0.213687756 container start 655765b3cbafd88ea77b5bcd85327dd53d1886c908ee1d8d02ccf29c6124e107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:28:47 compute-0 podman[125576]: 2025-11-24 18:28:47.872128215 +0000 UTC m=+0.217305656 container attach 655765b3cbafd88ea77b5bcd85327dd53d1886c908ee1d8d02ccf29c6124e107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:28:47 compute-0 intelligent_hermann[125591]: 167 167
Nov 24 18:28:47 compute-0 systemd[1]: libpod-655765b3cbafd88ea77b5bcd85327dd53d1886c908ee1d8d02ccf29c6124e107.scope: Deactivated successfully.
Nov 24 18:28:47 compute-0 podman[125576]: 2025-11-24 18:28:47.874704719 +0000 UTC m=+0.219882150 container died 655765b3cbafd88ea77b5bcd85327dd53d1886c908ee1d8d02ccf29c6124e107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hermann, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 18:28:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-0721ba48362adbb4cae5897964e05ae59a5cc9390f048e5e03baaca416331697-merged.mount: Deactivated successfully.
Nov 24 18:28:47 compute-0 podman[125576]: 2025-11-24 18:28:47.911483548 +0000 UTC m=+0.256660979 container remove 655765b3cbafd88ea77b5bcd85327dd53d1886c908ee1d8d02ccf29c6124e107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:28:47 compute-0 systemd[1]: libpod-conmon-655765b3cbafd88ea77b5bcd85327dd53d1886c908ee1d8d02ccf29c6124e107.scope: Deactivated successfully.
Nov 24 18:28:48 compute-0 podman[125615]: 2025-11-24 18:28:48.059939024 +0000 UTC m=+0.035094447 container create 8c89d8c6b8cfc0d4bf725515f1ed532ff426e1b7142e3d6bb420d41697082e2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:28:48 compute-0 systemd[1]: Started libpod-conmon-8c89d8c6b8cfc0d4bf725515f1ed532ff426e1b7142e3d6bb420d41697082e2c.scope.
Nov 24 18:28:48 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:28:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d9b68526bf1b9ed58cfdc2197a820e0d102d48ff785559311659c1333191e55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:28:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d9b68526bf1b9ed58cfdc2197a820e0d102d48ff785559311659c1333191e55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:28:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d9b68526bf1b9ed58cfdc2197a820e0d102d48ff785559311659c1333191e55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:28:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d9b68526bf1b9ed58cfdc2197a820e0d102d48ff785559311659c1333191e55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:28:48 compute-0 podman[125615]: 2025-11-24 18:28:48.118047705 +0000 UTC m=+0.093203148 container init 8c89d8c6b8cfc0d4bf725515f1ed532ff426e1b7142e3d6bb420d41697082e2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 18:28:48 compute-0 podman[125615]: 2025-11-24 18:28:48.126963257 +0000 UTC m=+0.102118680 container start 8c89d8c6b8cfc0d4bf725515f1ed532ff426e1b7142e3d6bb420d41697082e2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lehmann, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 18:28:48 compute-0 podman[125615]: 2025-11-24 18:28:48.133180022 +0000 UTC m=+0.108335475 container attach 8c89d8c6b8cfc0d4bf725515f1ed532ff426e1b7142e3d6bb420d41697082e2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 18:28:48 compute-0 ceph-mon[74927]: pgmap v387: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:48 compute-0 podman[125615]: 2025-11-24 18:28:48.044396186 +0000 UTC m=+0.019551629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:28:48 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v388: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:49 compute-0 dazzling_lehmann[125632]: {
Nov 24 18:28:49 compute-0 dazzling_lehmann[125632]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:28:49 compute-0 dazzling_lehmann[125632]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:28:49 compute-0 dazzling_lehmann[125632]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:28:49 compute-0 dazzling_lehmann[125632]:         "osd_id": 0,
Nov 24 18:28:49 compute-0 dazzling_lehmann[125632]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:28:49 compute-0 dazzling_lehmann[125632]:         "type": "bluestore"
Nov 24 18:28:49 compute-0 dazzling_lehmann[125632]:     },
Nov 24 18:28:49 compute-0 dazzling_lehmann[125632]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:28:49 compute-0 dazzling_lehmann[125632]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:28:49 compute-0 dazzling_lehmann[125632]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:28:49 compute-0 dazzling_lehmann[125632]:         "osd_id": 1,
Nov 24 18:28:49 compute-0 dazzling_lehmann[125632]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:28:49 compute-0 dazzling_lehmann[125632]:         "type": "bluestore"
Nov 24 18:28:49 compute-0 dazzling_lehmann[125632]:     },
Nov 24 18:28:49 compute-0 dazzling_lehmann[125632]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:28:49 compute-0 dazzling_lehmann[125632]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:28:49 compute-0 dazzling_lehmann[125632]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:28:49 compute-0 dazzling_lehmann[125632]:         "osd_id": 2,
Nov 24 18:28:49 compute-0 dazzling_lehmann[125632]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:28:49 compute-0 dazzling_lehmann[125632]:         "type": "bluestore"
Nov 24 18:28:49 compute-0 dazzling_lehmann[125632]:     }
Nov 24 18:28:49 compute-0 dazzling_lehmann[125632]: }
Nov 24 18:28:49 compute-0 systemd[1]: libpod-8c89d8c6b8cfc0d4bf725515f1ed532ff426e1b7142e3d6bb420d41697082e2c.scope: Deactivated successfully.
Nov 24 18:28:49 compute-0 podman[125615]: 2025-11-24 18:28:49.059917828 +0000 UTC m=+1.035073251 container died 8c89d8c6b8cfc0d4bf725515f1ed532ff426e1b7142e3d6bb420d41697082e2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lehmann, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:28:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d9b68526bf1b9ed58cfdc2197a820e0d102d48ff785559311659c1333191e55-merged.mount: Deactivated successfully.
Nov 24 18:28:49 compute-0 podman[125615]: 2025-11-24 18:28:49.125463064 +0000 UTC m=+1.100618497 container remove 8c89d8c6b8cfc0d4bf725515f1ed532ff426e1b7142e3d6bb420d41697082e2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 24 18:28:49 compute-0 systemd[1]: libpod-conmon-8c89d8c6b8cfc0d4bf725515f1ed532ff426e1b7142e3d6bb420d41697082e2c.scope: Deactivated successfully.
Nov 24 18:28:49 compute-0 sudo[125511]: pam_unix(sudo:session): session closed for user root
Nov 24 18:28:49 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:28:49 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:28:49 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:28:49 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:28:49 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 78fa0eff-b1b3-4ab6-9acc-68799d157e74 does not exist
Nov 24 18:28:49 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 3faf0d27-436b-47af-b68b-fb443192d04a does not exist
Nov 24 18:28:49 compute-0 sudo[125677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:28:49 compute-0 sudo[125677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:28:49 compute-0 sudo[125677]: pam_unix(sudo:session): session closed for user root
Nov 24 18:28:49 compute-0 sudo[125702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:28:49 compute-0 sudo[125702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:28:49 compute-0 sudo[125702]: pam_unix(sudo:session): session closed for user root
Nov 24 18:28:50 compute-0 ceph-mon[74927]: pgmap v388: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:28:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:28:50 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v389: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:52 compute-0 ceph-mon[74927]: pgmap v389: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:28:52 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v390: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:54 compute-0 ceph-mon[74927]: pgmap v390: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:54 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v391: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:56 compute-0 ceph-mon[74927]: pgmap v391: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:56 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v392: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:28:57 compute-0 sudo[124682]: pam_unix(sudo:session): session closed for user root
Nov 24 18:28:58 compute-0 ceph-mon[74927]: pgmap v392: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:58 compute-0 sudo[125876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deocwcfimaptromvzaxarafettqcpget ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008937.9192305-137-131019930470479/AnsiballZ_command.py'
Nov 24 18:28:58 compute-0 sudo[125876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:28:58 compute-0 python3.9[125878]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:28:58 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v393: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:59 compute-0 ceph-mon[74927]: pgmap v393: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:28:59 compute-0 sudo[125876]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:00 compute-0 sudo[126163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbnvjueldojjtjmcggoxheklkbhxeqlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008939.4778895-145-172648144117028/AnsiballZ_selinux.py'
Nov 24 18:29:00 compute-0 sudo[126163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:00 compute-0 python3.9[126165]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 24 18:29:00 compute-0 sudo[126163]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:00 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v394: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:01 compute-0 sudo[126315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvhllkgoocdqqjwsdcofcpntgotdnvak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008940.8039598-156-53914586515056/AnsiballZ_command.py'
Nov 24 18:29:01 compute-0 sudo[126315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:01 compute-0 python3.9[126317]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 24 18:29:01 compute-0 sudo[126315]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:01 compute-0 ceph-mon[74927]: pgmap v394: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:01 compute-0 anacron[30860]: Job `cron.daily' started
Nov 24 18:29:01 compute-0 anacron[30860]: Job `cron.daily' terminated
Nov 24 18:29:01 compute-0 sudo[126469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evapswtjnqgiapsajzjdemaweoyirgiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008941.536854-164-151652704792802/AnsiballZ_file.py'
Nov 24 18:29:01 compute-0 sudo[126469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:02 compute-0 python3.9[126471]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:29:02 compute-0 sudo[126469]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:29:02 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v395: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:02 compute-0 sudo[126621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dntpscssxwuuohnjgsfwexhvvmhazmom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008942.287237-172-6056261983198/AnsiballZ_mount.py'
Nov 24 18:29:02 compute-0 sudo[126621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:03 compute-0 python3.9[126623]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 24 18:29:03 compute-0 sudo[126621]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:03 compute-0 ceph-mon[74927]: pgmap v395: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:04 compute-0 sudo[126773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nctlbchdxtgtuekfevddynihmuovdarr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008943.8055954-200-163851378826838/AnsiballZ_file.py'
Nov 24 18:29:04 compute-0 sudo[126773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:04 compute-0 python3.9[126775]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:29:04 compute-0 sudo[126773]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:04 compute-0 sudo[126925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cspcoadksfexaeojsjjoqlfqotfssvdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008944.3839085-208-247123213329118/AnsiballZ_stat.py'
Nov 24 18:29:04 compute-0 sudo[126925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:04 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v396: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:29:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:29:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:29:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:29:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:29:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:29:04 compute-0 python3.9[126927]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:29:04 compute-0 sudo[126925]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:05 compute-0 sudo[127003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-habchdzovatfbirtmukofbgktmyfnfyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008944.3839085-208-247123213329118/AnsiballZ_file.py'
Nov 24 18:29:05 compute-0 sudo[127003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:05 compute-0 python3.9[127005]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:29:05 compute-0 sudo[127003]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:05 compute-0 ceph-mon[74927]: pgmap v396: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:05 compute-0 sudo[127155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvnxsbmmgdrtpeukwoidxolscrbfxpbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008945.6946185-229-235293765050514/AnsiballZ_stat.py'
Nov 24 18:29:06 compute-0 sudo[127155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:06 compute-0 python3.9[127157]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:29:06 compute-0 sudo[127155]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:06 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v397: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:07 compute-0 sudo[127309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofjxkzfjnxuqfmzsqjxvwllarpxrnbuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008946.6731465-242-221077684623160/AnsiballZ_getent.py'
Nov 24 18:29:07 compute-0 sudo[127309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:07 compute-0 python3.9[127311]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 24 18:29:07 compute-0 sudo[127309]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:29:07 compute-0 ceph-mon[74927]: pgmap v397: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:07 compute-0 sudo[127462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjcmzegrygwdhcbjigqvgnishjupmoqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008947.5379276-252-69595730503837/AnsiballZ_getent.py'
Nov 24 18:29:07 compute-0 sudo[127462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:07 compute-0 python3.9[127464]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 24 18:29:07 compute-0 sudo[127462]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:08 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v398: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:08 compute-0 sudo[127615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txsirokkefgkodvuhidqbddkudmwnavm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008948.160667-260-119660132992208/AnsiballZ_group.py'
Nov 24 18:29:08 compute-0 sudo[127615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:08 compute-0 python3.9[127617]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 18:29:08 compute-0 sudo[127615]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:09 compute-0 sudo[127767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djzslycnmizbwqlfhsjrbffvahunryzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008949.1027858-269-75318648355549/AnsiballZ_file.py'
Nov 24 18:29:09 compute-0 sudo[127767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:09 compute-0 python3.9[127769]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 24 18:29:09 compute-0 sudo[127767]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:09 compute-0 ceph-mon[74927]: pgmap v398: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:10 compute-0 sudo[127919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubhpgkyieypmjuymkzbfhqeuumulluon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008949.9626586-280-193614302359490/AnsiballZ_dnf.py'
Nov 24 18:29:10 compute-0 sudo[127919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:10 compute-0 python3.9[127921]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:29:10 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v399: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:11 compute-0 sudo[127919]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:11 compute-0 ceph-mon[74927]: pgmap v399: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:12 compute-0 sudo[128072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maefgxbnsgywqbdvjekpdastekrbzcll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008951.9308655-288-107390613272992/AnsiballZ_file.py'
Nov 24 18:29:12 compute-0 sudo[128072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:12 compute-0 python3.9[128074]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:29:12 compute-0 sudo[128072]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:29:12 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v400: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:12 compute-0 sudo[128224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbwpsrljjfekcakxqundwhcpvyxcaoha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008952.6042314-296-112729506421089/AnsiballZ_stat.py'
Nov 24 18:29:12 compute-0 sudo[128224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:13 compute-0 python3.9[128226]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:29:13 compute-0 sudo[128224]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:13 compute-0 sudo[128302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imptydvvkwvlhraoezpzncsnsoruodzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008952.6042314-296-112729506421089/AnsiballZ_file.py'
Nov 24 18:29:13 compute-0 sudo[128302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:13 compute-0 python3.9[128304]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:29:13 compute-0 sudo[128302]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:13 compute-0 ceph-mon[74927]: pgmap v400: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:14 compute-0 sudo[128454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibjjxeilvqomatvtlqzxqzmwdiyrprxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008953.7343137-309-191363750675027/AnsiballZ_stat.py'
Nov 24 18:29:14 compute-0 sudo[128454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:14 compute-0 python3.9[128456]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:29:14 compute-0 sudo[128454]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:14 compute-0 sudo[128532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esicjajlmbdwitrhxtnngmbkjwxknlcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008953.7343137-309-191363750675027/AnsiballZ_file.py'
Nov 24 18:29:14 compute-0 sudo[128532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:14 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v401: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:14 compute-0 python3.9[128534]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:29:14 compute-0 sudo[128532]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:15 compute-0 sudo[128684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caznlyjhxomttftiebhabbiovppntozp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008955.071302-324-221176059760581/AnsiballZ_dnf.py'
Nov 24 18:29:15 compute-0 sudo[128684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:15 compute-0 python3.9[128686]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:29:15 compute-0 ceph-mon[74927]: pgmap v401: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:16 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v402: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:16 compute-0 sudo[128684]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:29:17 compute-0 python3.9[128837]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:29:17 compute-0 ceph-mon[74927]: pgmap v402: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:18 compute-0 python3.9[128989]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 24 18:29:18 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v403: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:19 compute-0 python3.9[129139]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:29:20 compute-0 sudo[129289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoqytnhsmgtzylzfvmcucrdjakfdmhhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008959.6684608-365-117585245635945/AnsiballZ_systemd.py'
Nov 24 18:29:20 compute-0 sudo[129289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:20 compute-0 ceph-mon[74927]: pgmap v403: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:20 compute-0 python3.9[129291]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:29:20 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v404: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:20 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 24 18:29:20 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 24 18:29:20 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 24 18:29:20 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 24 18:29:20 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 24 18:29:21 compute-0 sudo[129289]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:21 compute-0 ceph-mon[74927]: pgmap v404: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:21 compute-0 python3.9[129453]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 24 18:29:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:29:22 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v405: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:23 compute-0 sudo[129603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prutfbtcqzlwgawelwavxbryagnyoeni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008963.4341552-422-158975630579376/AnsiballZ_systemd.py'
Nov 24 18:29:23 compute-0 sudo[129603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:23 compute-0 ceph-mon[74927]: pgmap v405: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:23 compute-0 python3.9[129605]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:29:24 compute-0 sudo[129603]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:24 compute-0 sudo[129757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzkpaybntlcabofqubpyrgcnugdxcfaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008964.1863132-422-55462764150877/AnsiballZ_systemd.py'
Nov 24 18:29:24 compute-0 sudo[129757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:24 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v406: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:24 compute-0 python3.9[129759]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:29:24 compute-0 sudo[129757]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:25 compute-0 sshd-session[122770]: Connection closed by 192.168.122.30 port 33920
Nov 24 18:29:25 compute-0 sshd-session[122767]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:29:25 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Nov 24 18:29:25 compute-0 systemd[1]: session-37.scope: Consumed 53.477s CPU time.
Nov 24 18:29:25 compute-0 systemd-logind[822]: Session 37 logged out. Waiting for processes to exit.
Nov 24 18:29:25 compute-0 systemd-logind[822]: Removed session 37.
Nov 24 18:29:25 compute-0 ceph-mon[74927]: pgmap v406: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.728091) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008965728141, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 700, "num_deletes": 251, "total_data_size": 872275, "memory_usage": 885096, "flush_reason": "Manual Compaction"}
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008965736734, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 864559, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9023, "largest_seqno": 9722, "table_properties": {"data_size": 860913, "index_size": 1490, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7914, "raw_average_key_size": 18, "raw_value_size": 853616, "raw_average_value_size": 1999, "num_data_blocks": 69, "num_entries": 427, "num_filter_entries": 427, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008907, "oldest_key_time": 1764008907, "file_creation_time": 1764008965, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 8678 microseconds, and 5338 cpu microseconds.
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.736774) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 864559 bytes OK
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.736792) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.738212) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.738227) EVENT_LOG_v1 {"time_micros": 1764008965738222, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.738243) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 868636, prev total WAL file size 868636, number of live WAL files 2.
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.738845) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(844KB)], [23(6627KB)]
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008965738867, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7651586, "oldest_snapshot_seqno": -1}
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3312 keys, 6143227 bytes, temperature: kUnknown
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008965772728, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6143227, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6119099, "index_size": 14739, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 80377, "raw_average_key_size": 24, "raw_value_size": 6057190, "raw_average_value_size": 1828, "num_data_blocks": 644, "num_entries": 3312, "num_filter_entries": 3312, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008324, "oldest_key_time": 0, "file_creation_time": 1764008965, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.772973) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6143227 bytes
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.774376) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 225.6 rd, 181.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 6.5 +0.0 blob) out(5.9 +0.0 blob), read-write-amplify(16.0) write-amplify(7.1) OK, records in: 3826, records dropped: 514 output_compression: NoCompression
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.774395) EVENT_LOG_v1 {"time_micros": 1764008965774386, "job": 8, "event": "compaction_finished", "compaction_time_micros": 33922, "compaction_time_cpu_micros": 17576, "output_level": 6, "num_output_files": 1, "total_output_size": 6143227, "num_input_records": 3826, "num_output_records": 3312, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008965774652, "job": 8, "event": "table_file_deletion", "file_number": 25}
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008965776089, "job": 8, "event": "table_file_deletion", "file_number": 23}
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.738750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.776137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.776143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.776146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.776150) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:29:25 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.776153) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:29:26 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v407: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:29:27 compute-0 ceph-mon[74927]: pgmap v407: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:28 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v408: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:29 compute-0 ceph-mon[74927]: pgmap v408: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:30 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v409: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:31 compute-0 sshd-session[129786]: Accepted publickey for zuul from 192.168.122.30 port 51102 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:29:31 compute-0 systemd-logind[822]: New session 38 of user zuul.
Nov 24 18:29:31 compute-0 systemd[1]: Started Session 38 of User zuul.
Nov 24 18:29:31 compute-0 sshd-session[129786]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:29:31 compute-0 ceph-mon[74927]: pgmap v409: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:32 compute-0 python3.9[129939]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:29:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:29:32 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v410: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:33 compute-0 sudo[130093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fomoljgakjspyruedvzmuxppxbcakzxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008972.8487353-36-241890853354882/AnsiballZ_getent.py'
Nov 24 18:29:33 compute-0 sudo[130093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:33 compute-0 python3.9[130095]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 24 18:29:33 compute-0 sudo[130093]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:33 compute-0 ceph-mon[74927]: pgmap v410: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:34 compute-0 sudo[130246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjjxejjghqdxtuvgxbixalgyqxxvrsie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008973.8012714-48-162742389108703/AnsiballZ_setup.py'
Nov 24 18:29:34 compute-0 sudo[130246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:34 compute-0 python3.9[130248]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 18:29:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:29:34
Nov 24 18:29:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:29:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:29:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'images', 'vms']
Nov 24 18:29:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:29:34 compute-0 sudo[130246]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:34 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v411: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:29:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:29:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:29:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:29:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:29:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:29:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:29:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:29:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:29:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:29:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:29:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:29:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:29:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:29:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:29:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:29:35 compute-0 sudo[130330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suqhsyknkmekrfrndjegryvipnvqcfxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008973.8012714-48-162742389108703/AnsiballZ_dnf.py'
Nov 24 18:29:35 compute-0 sudo[130330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:35 compute-0 python3.9[130332]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 18:29:35 compute-0 ceph-mon[74927]: pgmap v411: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:36 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v412: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:36 compute-0 sudo[130330]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:37 compute-0 sudo[130483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzadjnkrusjfzxhgergufioqbbphkhpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008977.0240822-62-56238532604588/AnsiballZ_dnf.py'
Nov 24 18:29:37 compute-0 sudo[130483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:37 compute-0 python3.9[130485]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:29:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:29:37 compute-0 ceph-mon[74927]: pgmap v412: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:38 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v413: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:38 compute-0 sudo[130483]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:39 compute-0 sudo[130636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fixiyvxuswxfyxjdcaetktqitsmfvgei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008978.8814576-70-8852008981611/AnsiballZ_systemd.py'
Nov 24 18:29:39 compute-0 sudo[130636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:39 compute-0 python3.9[130638]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 18:29:39 compute-0 ceph-mon[74927]: pgmap v413: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:39 compute-0 sudo[130636]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:40 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v414: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:40 compute-0 python3.9[130791]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:29:41 compute-0 sudo[130941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqlpzuuctooxmtanolfnknrbhnkeavdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008981.144191-88-7104401513105/AnsiballZ_sefcontext.py'
Nov 24 18:29:41 compute-0 sudo[130941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:41 compute-0 ceph-mon[74927]: pgmap v414: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:41 compute-0 python3.9[130943]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 24 18:29:42 compute-0 sudo[130941]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:29:42 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v415: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:42 compute-0 python3.9[131093]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:29:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:29:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:29:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:29:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:29:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:29:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:29:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:29:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:29:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:29:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:29:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:29:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:29:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:29:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:29:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:29:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:29:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:29:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:29:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:29:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:29:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:29:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:29:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:29:43 compute-0 sudo[131249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eznpktvoadrkagzrtmgcezmrikafinqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008983.2170992-106-205355758147103/AnsiballZ_dnf.py'
Nov 24 18:29:43 compute-0 sudo[131249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:43 compute-0 python3.9[131251]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:29:43 compute-0 ceph-mon[74927]: pgmap v415: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:44 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v416: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:44 compute-0 sudo[131249]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:45 compute-0 sudo[131402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnmnuzekmhdxiilkwbgrhrmvtsplfrwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008985.0257945-114-164019393819718/AnsiballZ_command.py'
Nov 24 18:29:45 compute-0 sudo[131402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:45 compute-0 python3.9[131404]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:29:45 compute-0 ceph-mon[74927]: pgmap v416: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:46 compute-0 sudo[131402]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:46 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v417: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:47 compute-0 sudo[131689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcscsssmwauglkwgfrlcccacnfikhxmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008986.7217891-122-148127597487735/AnsiballZ_file.py'
Nov 24 18:29:47 compute-0 sudo[131689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:47 compute-0 python3.9[131691]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 24 18:29:47 compute-0 sudo[131689]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:29:47 compute-0 ceph-mon[74927]: pgmap v417: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:48 compute-0 python3.9[131841]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:29:48 compute-0 sudo[131993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpxjnrylpvcnalofdrvessvhqbmdqvkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008988.372134-138-247291020180720/AnsiballZ_dnf.py'
Nov 24 18:29:48 compute-0 sudo[131993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:48 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v418: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:48 compute-0 python3.9[131995]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:29:49 compute-0 sudo[131997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:29:49 compute-0 sudo[131997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:29:49 compute-0 sudo[131997]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:49 compute-0 sudo[132022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:29:49 compute-0 sudo[132022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:29:49 compute-0 sudo[132022]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:49 compute-0 sudo[132047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:29:49 compute-0 sudo[132047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:29:49 compute-0 sudo[132047]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:49 compute-0 sudo[132072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 24 18:29:49 compute-0 sudo[132072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:29:49 compute-0 sudo[132072]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:49 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:29:49 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:29:49 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:29:49 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:29:49 compute-0 ceph-mon[74927]: pgmap v418: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:49 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:29:49 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:29:49 compute-0 sudo[132116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:29:49 compute-0 sudo[132116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:29:49 compute-0 sudo[132116]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:49 compute-0 sudo[132141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:29:49 compute-0 sudo[132141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:29:49 compute-0 sudo[132141]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:50 compute-0 sudo[132166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:29:50 compute-0 sudo[132166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:29:50 compute-0 sudo[132166]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:50 compute-0 sudo[132191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:29:50 compute-0 sudo[132191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:29:50 compute-0 sudo[131993]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:50 compute-0 sudo[132191]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:29:50 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:29:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:29:50 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:29:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:29:50 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:29:50 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 12a61a0a-98cc-4cef-a06a-2c3909c2ade8 does not exist
Nov 24 18:29:50 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev c5b823b5-2d1b-4cfd-a46c-79e989ab1ec3 does not exist
Nov 24 18:29:50 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 00b462ac-eded-4e3b-93fd-f512c71144ee does not exist
Nov 24 18:29:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:29:50 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:29:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:29:50 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:29:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:29:50 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:29:50 compute-0 sudo[132419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlnktiogkusfdltlwepirjdgblugqsex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008990.399645-147-87151773966211/AnsiballZ_dnf.py'
Nov 24 18:29:50 compute-0 sudo[132370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:29:50 compute-0 sudo[132419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:50 compute-0 sudo[132370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:29:50 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v419: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:50 compute-0 sudo[132370]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:50 compute-0 sudo[132424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:29:50 compute-0 sudo[132424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:29:50 compute-0 sudo[132424]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:50 compute-0 sudo[132449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:29:50 compute-0 sudo[132449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:29:50 compute-0 sudo[132449]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:50 compute-0 sudo[132474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:29:50 compute-0 sudo[132474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:29:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:29:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:29:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:29:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:29:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:29:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:29:50 compute-0 python3.9[132422]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:29:51 compute-0 podman[132540]: 2025-11-24 18:29:51.121785139 +0000 UTC m=+0.052791156 container create 614a462754560c58e7bd367c970a2ef6682bd3b1dfce5e168cc0d1c09b762710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:29:51 compute-0 systemd[1]: Started libpod-conmon-614a462754560c58e7bd367c970a2ef6682bd3b1dfce5e168cc0d1c09b762710.scope.
Nov 24 18:29:51 compute-0 podman[132540]: 2025-11-24 18:29:51.093539197 +0000 UTC m=+0.024545264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:29:51 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:29:51 compute-0 podman[132540]: 2025-11-24 18:29:51.214601722 +0000 UTC m=+0.145607739 container init 614a462754560c58e7bd367c970a2ef6682bd3b1dfce5e168cc0d1c09b762710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 18:29:51 compute-0 podman[132540]: 2025-11-24 18:29:51.225637228 +0000 UTC m=+0.156643215 container start 614a462754560c58e7bd367c970a2ef6682bd3b1dfce5e168cc0d1c09b762710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:29:51 compute-0 podman[132540]: 2025-11-24 18:29:51.229087432 +0000 UTC m=+0.160093429 container attach 614a462754560c58e7bd367c970a2ef6682bd3b1dfce5e168cc0d1c09b762710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 18:29:51 compute-0 funny_mahavira[132557]: 167 167
Nov 24 18:29:51 compute-0 podman[132540]: 2025-11-24 18:29:51.230784453 +0000 UTC m=+0.161790440 container died 614a462754560c58e7bd367c970a2ef6682bd3b1dfce5e168cc0d1c09b762710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 18:29:51 compute-0 systemd[1]: libpod-614a462754560c58e7bd367c970a2ef6682bd3b1dfce5e168cc0d1c09b762710.scope: Deactivated successfully.
Nov 24 18:29:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-81ad6d95006f968323db6b4f545c641c5c38b8cbfb6a4e1cfef9ac53f0800580-merged.mount: Deactivated successfully.
Nov 24 18:29:51 compute-0 podman[132540]: 2025-11-24 18:29:51.274753145 +0000 UTC m=+0.205759132 container remove 614a462754560c58e7bd367c970a2ef6682bd3b1dfce5e168cc0d1c09b762710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:29:51 compute-0 systemd[1]: libpod-conmon-614a462754560c58e7bd367c970a2ef6682bd3b1dfce5e168cc0d1c09b762710.scope: Deactivated successfully.
Nov 24 18:29:51 compute-0 podman[132582]: 2025-11-24 18:29:51.477555415 +0000 UTC m=+0.059590391 container create 56bb7c5365aed8ba0288c82650ce5f5c0acd2bb5f62e4c440bc617575b33e62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_kirch, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 18:29:51 compute-0 systemd[1]: Started libpod-conmon-56bb7c5365aed8ba0288c82650ce5f5c0acd2bb5f62e4c440bc617575b33e62b.scope.
Nov 24 18:29:51 compute-0 podman[132582]: 2025-11-24 18:29:51.45211723 +0000 UTC m=+0.034152296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:29:51 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:29:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dcc3a35ef3c5d8f08939d7564e1b60751df759f18ec252a970d092aec3682f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:29:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dcc3a35ef3c5d8f08939d7564e1b60751df759f18ec252a970d092aec3682f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:29:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dcc3a35ef3c5d8f08939d7564e1b60751df759f18ec252a970d092aec3682f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:29:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dcc3a35ef3c5d8f08939d7564e1b60751df759f18ec252a970d092aec3682f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:29:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dcc3a35ef3c5d8f08939d7564e1b60751df759f18ec252a970d092aec3682f5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:29:51 compute-0 podman[132582]: 2025-11-24 18:29:51.619234507 +0000 UTC m=+0.201269543 container init 56bb7c5365aed8ba0288c82650ce5f5c0acd2bb5f62e4c440bc617575b33e62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 18:29:51 compute-0 podman[132582]: 2025-11-24 18:29:51.627572719 +0000 UTC m=+0.209607745 container start 56bb7c5365aed8ba0288c82650ce5f5c0acd2bb5f62e4c440bc617575b33e62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_kirch, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:29:51 compute-0 podman[132582]: 2025-11-24 18:29:51.644935778 +0000 UTC m=+0.226970774 container attach 56bb7c5365aed8ba0288c82650ce5f5c0acd2bb5f62e4c440bc617575b33e62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_kirch, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 18:29:51 compute-0 ceph-mon[74927]: pgmap v419: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:52 compute-0 sudo[132419]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:29:52 compute-0 trusting_kirch[132599]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:29:52 compute-0 trusting_kirch[132599]: --> relative data size: 1.0
Nov 24 18:29:52 compute-0 trusting_kirch[132599]: --> All data devices are unavailable
Nov 24 18:29:52 compute-0 systemd[1]: libpod-56bb7c5365aed8ba0288c82650ce5f5c0acd2bb5f62e4c440bc617575b33e62b.scope: Deactivated successfully.
Nov 24 18:29:52 compute-0 podman[132582]: 2025-11-24 18:29:52.600024563 +0000 UTC m=+1.182059549 container died 56bb7c5365aed8ba0288c82650ce5f5c0acd2bb5f62e4c440bc617575b33e62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_kirch, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:29:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-3dcc3a35ef3c5d8f08939d7564e1b60751df759f18ec252a970d092aec3682f5-merged.mount: Deactivated successfully.
Nov 24 18:29:52 compute-0 podman[132582]: 2025-11-24 18:29:52.64795394 +0000 UTC m=+1.229988926 container remove 56bb7c5365aed8ba0288c82650ce5f5c0acd2bb5f62e4c440bc617575b33e62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:29:52 compute-0 systemd[1]: libpod-conmon-56bb7c5365aed8ba0288c82650ce5f5c0acd2bb5f62e4c440bc617575b33e62b.scope: Deactivated successfully.
Nov 24 18:29:52 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v420: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:52 compute-0 sudo[132474]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:52 compute-0 sudo[132752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:29:52 compute-0 sudo[132752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:29:52 compute-0 sudo[132752]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:52 compute-0 sudo[132798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:29:52 compute-0 sudo[132835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anwispsjsccfwzlbmugpvpdyserfvwya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008992.4895153-159-146004033423934/AnsiballZ_stat.py'
Nov 24 18:29:52 compute-0 sudo[132798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:29:52 compute-0 sudo[132835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:52 compute-0 sudo[132798]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:52 compute-0 sudo[132842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:29:52 compute-0 sudo[132842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:29:52 compute-0 sudo[132842]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:52 compute-0 sudo[132867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:29:52 compute-0 sudo[132867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:29:53 compute-0 python3.9[132841]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:29:53 compute-0 sudo[132835]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:53 compute-0 podman[133011]: 2025-11-24 18:29:53.364221054 +0000 UTC m=+0.050798838 container create 45a9b91b02d889422736965960f487a3fd6b4ae5973f8df0d23406e4943bc00a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_cartwright, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 18:29:53 compute-0 systemd[1]: Started libpod-conmon-45a9b91b02d889422736965960f487a3fd6b4ae5973f8df0d23406e4943bc00a.scope.
Nov 24 18:29:53 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:29:53 compute-0 podman[133011]: 2025-11-24 18:29:53.348143945 +0000 UTC m=+0.034721729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:29:53 compute-0 podman[133011]: 2025-11-24 18:29:53.44889589 +0000 UTC m=+0.135473714 container init 45a9b91b02d889422736965960f487a3fd6b4ae5973f8df0d23406e4943bc00a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_cartwright, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:29:53 compute-0 podman[133011]: 2025-11-24 18:29:53.456574095 +0000 UTC m=+0.143151859 container start 45a9b91b02d889422736965960f487a3fd6b4ae5973f8df0d23406e4943bc00a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_cartwright, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:29:53 compute-0 podman[133011]: 2025-11-24 18:29:53.460353776 +0000 UTC m=+0.146931590 container attach 45a9b91b02d889422736965960f487a3fd6b4ae5973f8df0d23406e4943bc00a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:29:53 compute-0 tender_cartwright[133027]: 167 167
Nov 24 18:29:53 compute-0 systemd[1]: libpod-45a9b91b02d889422736965960f487a3fd6b4ae5973f8df0d23406e4943bc00a.scope: Deactivated successfully.
Nov 24 18:29:53 compute-0 podman[133011]: 2025-11-24 18:29:53.464603049 +0000 UTC m=+0.151180833 container died 45a9b91b02d889422736965960f487a3fd6b4ae5973f8df0d23406e4943bc00a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_cartwright, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 18:29:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9e3a0849ba90a9678715de863173b0d666416fe4147598fc077cea570608201-merged.mount: Deactivated successfully.
Nov 24 18:29:53 compute-0 podman[133011]: 2025-11-24 18:29:53.513762017 +0000 UTC m=+0.200339811 container remove 45a9b91b02d889422736965960f487a3fd6b4ae5973f8df0d23406e4943bc00a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_cartwright, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:29:53 compute-0 systemd[1]: libpod-conmon-45a9b91b02d889422736965960f487a3fd6b4ae5973f8df0d23406e4943bc00a.scope: Deactivated successfully.
Nov 24 18:29:53 compute-0 sudo[133134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guxewcpwcvowrkvmjrzlyjekyicowvte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764008993.2241561-167-96555096381723/AnsiballZ_slurp.py'
Nov 24 18:29:53 compute-0 sudo[133134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:29:53 compute-0 podman[133110]: 2025-11-24 18:29:53.708089401 +0000 UTC m=+0.047982870 container create 53242f617ed147432dfaae7464c06a0995331fff17a3d94ba63e153889f5f3cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:29:53 compute-0 systemd[1]: Started libpod-conmon-53242f617ed147432dfaae7464c06a0995331fff17a3d94ba63e153889f5f3cd.scope.
Nov 24 18:29:53 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:29:53 compute-0 podman[133110]: 2025-11-24 18:29:53.68941501 +0000 UTC m=+0.029308569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:29:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02a36a02d798454e09198ac0ad15defe793fb784a0a63dbd764f056cb9f1534a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:29:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02a36a02d798454e09198ac0ad15defe793fb784a0a63dbd764f056cb9f1534a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:29:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02a36a02d798454e09198ac0ad15defe793fb784a0a63dbd764f056cb9f1534a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:29:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02a36a02d798454e09198ac0ad15defe793fb784a0a63dbd764f056cb9f1534a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:29:53 compute-0 podman[133110]: 2025-11-24 18:29:53.79617529 +0000 UTC m=+0.136068759 container init 53242f617ed147432dfaae7464c06a0995331fff17a3d94ba63e153889f5f3cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 18:29:53 compute-0 podman[133110]: 2025-11-24 18:29:53.804022119 +0000 UTC m=+0.143915588 container start 53242f617ed147432dfaae7464c06a0995331fff17a3d94ba63e153889f5f3cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 18:29:53 compute-0 podman[133110]: 2025-11-24 18:29:53.80735267 +0000 UTC m=+0.147246139 container attach 53242f617ed147432dfaae7464c06a0995331fff17a3d94ba63e153889f5f3cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 18:29:53 compute-0 ceph-mon[74927]: pgmap v420: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:53 compute-0 python3.9[133136]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 24 18:29:53 compute-0 sudo[133134]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]: {
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:     "0": [
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:         {
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "devices": [
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "/dev/loop3"
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             ],
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "lv_name": "ceph_lv0",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "lv_size": "21470642176",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "name": "ceph_lv0",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "tags": {
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.cluster_name": "ceph",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.crush_device_class": "",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.encrypted": "0",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.osd_id": "0",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.type": "block",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.vdo": "0"
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             },
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "type": "block",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "vg_name": "ceph_vg0"
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:         }
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:     ],
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:     "1": [
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:         {
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "devices": [
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "/dev/loop4"
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             ],
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "lv_name": "ceph_lv1",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "lv_size": "21470642176",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "name": "ceph_lv1",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "tags": {
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.cluster_name": "ceph",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.crush_device_class": "",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.encrypted": "0",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.osd_id": "1",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.type": "block",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.vdo": "0"
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             },
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "type": "block",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "vg_name": "ceph_vg1"
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:         }
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:     ],
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:     "2": [
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:         {
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "devices": [
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "/dev/loop5"
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             ],
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "lv_name": "ceph_lv2",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "lv_size": "21470642176",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "name": "ceph_lv2",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "tags": {
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.cluster_name": "ceph",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.crush_device_class": "",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.encrypted": "0",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.osd_id": "2",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.type": "block",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:                 "ceph.vdo": "0"
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             },
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "type": "block",
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:             "vg_name": "ceph_vg2"
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:         }
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]:     ]
Nov 24 18:29:54 compute-0 recursing_vaughan[133142]: }
Nov 24 18:29:54 compute-0 systemd[1]: libpod-53242f617ed147432dfaae7464c06a0995331fff17a3d94ba63e153889f5f3cd.scope: Deactivated successfully.
Nov 24 18:29:54 compute-0 podman[133110]: 2025-11-24 18:29:54.536994727 +0000 UTC m=+0.876888196 container died 53242f617ed147432dfaae7464c06a0995331fff17a3d94ba63e153889f5f3cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_vaughan, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 18:29:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-02a36a02d798454e09198ac0ad15defe793fb784a0a63dbd764f056cb9f1534a-merged.mount: Deactivated successfully.
Nov 24 18:29:54 compute-0 podman[133110]: 2025-11-24 18:29:54.647296472 +0000 UTC m=+0.987189941 container remove 53242f617ed147432dfaae7464c06a0995331fff17a3d94ba63e153889f5f3cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_vaughan, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 18:29:54 compute-0 systemd[1]: libpod-conmon-53242f617ed147432dfaae7464c06a0995331fff17a3d94ba63e153889f5f3cd.scope: Deactivated successfully.
Nov 24 18:29:54 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v421: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:54 compute-0 sudo[132867]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:54 compute-0 sudo[133189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:29:54 compute-0 sudo[133189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:29:54 compute-0 sudo[133189]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:54 compute-0 sudo[133214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:29:54 compute-0 sudo[133214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:29:54 compute-0 sudo[133214]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:54 compute-0 sudo[133239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:29:54 compute-0 sudo[133239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:29:54 compute-0 sudo[133239]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:54 compute-0 sshd-session[129789]: Connection closed by 192.168.122.30 port 51102
Nov 24 18:29:54 compute-0 sudo[133264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:29:54 compute-0 sshd-session[129786]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:29:54 compute-0 sudo[133264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:29:54 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Nov 24 18:29:54 compute-0 systemd[1]: session-38.scope: Consumed 17.604s CPU time.
Nov 24 18:29:54 compute-0 systemd-logind[822]: Session 38 logged out. Waiting for processes to exit.
Nov 24 18:29:54 compute-0 systemd-logind[822]: Removed session 38.
Nov 24 18:29:55 compute-0 podman[133327]: 2025-11-24 18:29:55.17620961 +0000 UTC m=+0.032462755 container create d6ffae72d6ff8daac0014446fbe794661295f8c2d27c4fbb7765c86561f722aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_jang, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:29:55 compute-0 systemd[1]: Started libpod-conmon-d6ffae72d6ff8daac0014446fbe794661295f8c2d27c4fbb7765c86561f722aa.scope.
Nov 24 18:29:55 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:29:55 compute-0 podman[133327]: 2025-11-24 18:29:55.244843538 +0000 UTC m=+0.101096703 container init d6ffae72d6ff8daac0014446fbe794661295f8c2d27c4fbb7765c86561f722aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:29:55 compute-0 podman[133327]: 2025-11-24 18:29:55.250985647 +0000 UTC m=+0.107238792 container start d6ffae72d6ff8daac0014446fbe794661295f8c2d27c4fbb7765c86561f722aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:29:55 compute-0 podman[133327]: 2025-11-24 18:29:55.253773094 +0000 UTC m=+0.110026259 container attach d6ffae72d6ff8daac0014446fbe794661295f8c2d27c4fbb7765c86561f722aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 18:29:55 compute-0 vibrant_jang[133344]: 167 167
Nov 24 18:29:55 compute-0 systemd[1]: libpod-d6ffae72d6ff8daac0014446fbe794661295f8c2d27c4fbb7765c86561f722aa.scope: Deactivated successfully.
Nov 24 18:29:55 compute-0 podman[133327]: 2025-11-24 18:29:55.255383653 +0000 UTC m=+0.111636798 container died d6ffae72d6ff8daac0014446fbe794661295f8c2d27c4fbb7765c86561f722aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_jang, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 18:29:55 compute-0 podman[133327]: 2025-11-24 18:29:55.161060844 +0000 UTC m=+0.017314009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:29:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd6eb7f4a9fff75f50879d39516b2693be4fcb255477063fa44d8e428d1fad92-merged.mount: Deactivated successfully.
Nov 24 18:29:55 compute-0 podman[133327]: 2025-11-24 18:29:55.304463849 +0000 UTC m=+0.160716994 container remove d6ffae72d6ff8daac0014446fbe794661295f8c2d27c4fbb7765c86561f722aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_jang, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:29:55 compute-0 systemd[1]: libpod-conmon-d6ffae72d6ff8daac0014446fbe794661295f8c2d27c4fbb7765c86561f722aa.scope: Deactivated successfully.
Nov 24 18:29:55 compute-0 podman[133371]: 2025-11-24 18:29:55.453438208 +0000 UTC m=+0.040503340 container create d301a9675e6c597217d353ef53aff158bf69e578b0108110cbbeb60e4f69a8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 18:29:55 compute-0 systemd[1]: Started libpod-conmon-d301a9675e6c597217d353ef53aff158bf69e578b0108110cbbeb60e4f69a8ca.scope.
Nov 24 18:29:55 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:29:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7c82043014fd0b342a3e8430c446b165b32548cde65697880f823c1d2cde83c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:29:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7c82043014fd0b342a3e8430c446b165b32548cde65697880f823c1d2cde83c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:29:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7c82043014fd0b342a3e8430c446b165b32548cde65697880f823c1d2cde83c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:29:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7c82043014fd0b342a3e8430c446b165b32548cde65697880f823c1d2cde83c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:29:55 compute-0 podman[133371]: 2025-11-24 18:29:55.517952176 +0000 UTC m=+0.105017298 container init d301a9675e6c597217d353ef53aff158bf69e578b0108110cbbeb60e4f69a8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 18:29:55 compute-0 podman[133371]: 2025-11-24 18:29:55.523447989 +0000 UTC m=+0.110513131 container start d301a9675e6c597217d353ef53aff158bf69e578b0108110cbbeb60e4f69a8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wozniak, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Nov 24 18:29:55 compute-0 podman[133371]: 2025-11-24 18:29:55.527378944 +0000 UTC m=+0.114444076 container attach d301a9675e6c597217d353ef53aff158bf69e578b0108110cbbeb60e4f69a8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wozniak, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:29:55 compute-0 podman[133371]: 2025-11-24 18:29:55.433951457 +0000 UTC m=+0.021016589 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:29:55 compute-0 ceph-mon[74927]: pgmap v421: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:56 compute-0 confident_wozniak[133387]: {
Nov 24 18:29:56 compute-0 confident_wozniak[133387]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:29:56 compute-0 confident_wozniak[133387]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:29:56 compute-0 confident_wozniak[133387]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:29:56 compute-0 confident_wozniak[133387]:         "osd_id": 0,
Nov 24 18:29:56 compute-0 confident_wozniak[133387]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:29:56 compute-0 confident_wozniak[133387]:         "type": "bluestore"
Nov 24 18:29:56 compute-0 confident_wozniak[133387]:     },
Nov 24 18:29:56 compute-0 confident_wozniak[133387]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:29:56 compute-0 confident_wozniak[133387]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:29:56 compute-0 confident_wozniak[133387]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:29:56 compute-0 confident_wozniak[133387]:         "osd_id": 1,
Nov 24 18:29:56 compute-0 confident_wozniak[133387]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:29:56 compute-0 confident_wozniak[133387]:         "type": "bluestore"
Nov 24 18:29:56 compute-0 confident_wozniak[133387]:     },
Nov 24 18:29:56 compute-0 confident_wozniak[133387]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:29:56 compute-0 confident_wozniak[133387]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:29:56 compute-0 confident_wozniak[133387]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:29:56 compute-0 confident_wozniak[133387]:         "osd_id": 2,
Nov 24 18:29:56 compute-0 confident_wozniak[133387]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:29:56 compute-0 confident_wozniak[133387]:         "type": "bluestore"
Nov 24 18:29:56 compute-0 confident_wozniak[133387]:     }
Nov 24 18:29:56 compute-0 confident_wozniak[133387]: }
Nov 24 18:29:56 compute-0 systemd[1]: libpod-d301a9675e6c597217d353ef53aff158bf69e578b0108110cbbeb60e4f69a8ca.scope: Deactivated successfully.
Nov 24 18:29:56 compute-0 podman[133371]: 2025-11-24 18:29:56.480355606 +0000 UTC m=+1.067420728 container died d301a9675e6c597217d353ef53aff158bf69e578b0108110cbbeb60e4f69a8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wozniak, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 18:29:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7c82043014fd0b342a3e8430c446b165b32548cde65697880f823c1d2cde83c-merged.mount: Deactivated successfully.
Nov 24 18:29:56 compute-0 podman[133371]: 2025-11-24 18:29:56.535874988 +0000 UTC m=+1.122940120 container remove d301a9675e6c597217d353ef53aff158bf69e578b0108110cbbeb60e4f69a8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wozniak, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:29:56 compute-0 systemd[1]: libpod-conmon-d301a9675e6c597217d353ef53aff158bf69e578b0108110cbbeb60e4f69a8ca.scope: Deactivated successfully.
Nov 24 18:29:56 compute-0 sudo[133264]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:56 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:29:56 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:29:56 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:29:56 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:29:56 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev a09f657f-ee9a-4e5b-ba35-ad49ce1a5968 does not exist
Nov 24 18:29:56 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 4e0b5366-6db9-4c11-91f3-2e8e92e75fdf does not exist
Nov 24 18:29:56 compute-0 sudo[133435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:29:56 compute-0 sudo[133435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:29:56 compute-0 sudo[133435]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:56 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v422: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:56 compute-0 sudo[133460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:29:56 compute-0 sudo[133460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:29:56 compute-0 sudo[133460]: pam_unix(sudo:session): session closed for user root
Nov 24 18:29:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:29:57 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:29:57 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:29:57 compute-0 ceph-mon[74927]: pgmap v422: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:29:59 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v423: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:00 compute-0 ceph-mon[74927]: pgmap v423: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:00 compute-0 sshd-session[133485]: Accepted publickey for zuul from 192.168.122.30 port 50646 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:30:00 compute-0 systemd-logind[822]: New session 39 of user zuul.
Nov 24 18:30:00 compute-0 systemd[1]: Started Session 39 of User zuul.
Nov 24 18:30:00 compute-0 sshd-session[133485]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:30:01 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v424: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:01 compute-0 python3.9[133638]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:30:02 compute-0 ceph-mon[74927]: pgmap v424: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:02 compute-0 python3.9[133792]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 18:30:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:30:03 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v425: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:03 compute-0 python3.9[133985]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:30:03 compute-0 sshd-session[133488]: Connection closed by 192.168.122.30 port 50646
Nov 24 18:30:03 compute-0 sshd-session[133485]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:30:03 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Nov 24 18:30:03 compute-0 systemd[1]: session-39.scope: Consumed 2.154s CPU time.
Nov 24 18:30:03 compute-0 systemd-logind[822]: Session 39 logged out. Waiting for processes to exit.
Nov 24 18:30:03 compute-0 systemd-logind[822]: Removed session 39.
Nov 24 18:30:04 compute-0 ceph-mon[74927]: pgmap v425: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:30:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:30:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:30:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:30:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:30:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:30:05 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v426: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:05 compute-0 ceph-mon[74927]: pgmap v426: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:07 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v427: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:30:08 compute-0 ceph-mon[74927]: pgmap v427: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:09 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v428: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:09 compute-0 sshd-session[134012]: Accepted publickey for zuul from 192.168.122.30 port 33120 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:30:09 compute-0 systemd-logind[822]: New session 40 of user zuul.
Nov 24 18:30:09 compute-0 systemd[1]: Started Session 40 of User zuul.
Nov 24 18:30:09 compute-0 sshd-session[134012]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:30:10 compute-0 ceph-mon[74927]: pgmap v428: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:10 compute-0 python3.9[134165]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:30:11 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v429: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:11 compute-0 ceph-mon[74927]: pgmap v429: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:11 compute-0 python3.9[134319]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:30:12 compute-0 sudo[134473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjjhhghltvzzgwvopvwqxsknwsxcacbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009011.6957283-40-4490033701737/AnsiballZ_setup.py'
Nov 24 18:30:12 compute-0 sudo[134473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:12 compute-0 python3.9[134475]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 18:30:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:30:12 compute-0 sudo[134473]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:13 compute-0 sudo[134557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpdiscujssavpdwumbdiuhsggpjslfnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009011.6957283-40-4490033701737/AnsiballZ_dnf.py'
Nov 24 18:30:13 compute-0 sudo[134557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:13 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v430: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:13 compute-0 python3.9[134559]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:30:14 compute-0 ceph-mon[74927]: pgmap v430: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:14 compute-0 sudo[134557]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:15 compute-0 sudo[134710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qktlqucylcpuylmtagwjvrhbeqkerxkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009014.8000562-52-12705404504911/AnsiballZ_setup.py'
Nov 24 18:30:15 compute-0 sudo[134710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:15 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v431: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:15 compute-0 python3.9[134712]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 18:30:15 compute-0 sudo[134710]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:16 compute-0 ceph-mon[74927]: pgmap v431: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:16 compute-0 sudo[134905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieevxvbksizgxfskupjdtangxyhwvjms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009015.8580115-63-122880108359478/AnsiballZ_file.py'
Nov 24 18:30:16 compute-0 sudo[134905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:16 compute-0 python3.9[134907]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:30:16 compute-0 sudo[134905]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:17 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v432: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:17 compute-0 ceph-mon[74927]: pgmap v432: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:17 compute-0 sudo[135057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrdcuipwgjovsoqndjvovmfsrywhgpia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009016.7943444-71-240792928434056/AnsiballZ_command.py'
Nov 24 18:30:17 compute-0 sudo[135057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:17 compute-0 python3.9[135059]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:30:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:30:17 compute-0 sudo[135057]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:18 compute-0 sudo[135222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddhrljuvpqwmsdoovxhpnsewhhwfprbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009017.7390075-79-233701191681666/AnsiballZ_stat.py'
Nov 24 18:30:18 compute-0 sudo[135222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:18 compute-0 python3.9[135224]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:30:18 compute-0 sudo[135222]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:18 compute-0 sudo[135300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctuxjryebcfcgpqlyrfvblradjegtaph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009017.7390075-79-233701191681666/AnsiballZ_file.py'
Nov 24 18:30:18 compute-0 sudo[135300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:18 compute-0 python3.9[135302]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:30:18 compute-0 sudo[135300]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:19 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v433: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:19 compute-0 sudo[135452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcrjwfdfurkwtcfmwkvarsypggmzsfxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009019.0802734-91-280037283721163/AnsiballZ_stat.py'
Nov 24 18:30:19 compute-0 sudo[135452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:19 compute-0 python3.9[135454]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:30:19 compute-0 sudo[135452]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:19 compute-0 sudo[135530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sethhigrtzrnlcjidbvuczgmegucehmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009019.0802734-91-280037283721163/AnsiballZ_file.py'
Nov 24 18:30:19 compute-0 sudo[135530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:19 compute-0 python3.9[135532]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:30:20 compute-0 sudo[135530]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:20 compute-0 ceph-mon[74927]: pgmap v433: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:20 compute-0 sudo[135682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lurgkzuwmwqhoasvyvtwiejsnvrweccy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009020.1913383-104-236385566103455/AnsiballZ_ini_file.py'
Nov 24 18:30:20 compute-0 sudo[135682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:20 compute-0 python3.9[135684]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:30:20 compute-0 sudo[135682]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:21 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v434: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:21 compute-0 sudo[135834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldgmjrvpdgadkoyhoxoqbvoqtjyzbfso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009021.0095932-104-72375964425734/AnsiballZ_ini_file.py'
Nov 24 18:30:21 compute-0 sudo[135834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:21 compute-0 python3.9[135836]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:30:21 compute-0 sudo[135834]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:21 compute-0 sudo[135986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecjjvfnketzyaszipqyfgggwfvpepnod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009021.6068797-104-244111063038486/AnsiballZ_ini_file.py'
Nov 24 18:30:21 compute-0 sudo[135986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:22 compute-0 python3.9[135988]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:30:22 compute-0 sudo[135986]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:22 compute-0 ceph-mon[74927]: pgmap v434: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:22 compute-0 sudo[136138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epattlptyazlyfpansyzjjglwdhadpvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009022.2032974-104-35254374387823/AnsiballZ_ini_file.py'
Nov 24 18:30:22 compute-0 sudo[136138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:30:22 compute-0 python3.9[136140]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:30:22 compute-0 sudo[136138]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:23 compute-0 sudo[136290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcozvinlsachdferpxcbgmbkepqyzmrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009022.9010077-135-6236786064330/AnsiballZ_dnf.py'
Nov 24 18:30:23 compute-0 sudo[136290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:23 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v435: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:23 compute-0 python3.9[136292]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:30:24 compute-0 ceph-mon[74927]: pgmap v435: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:24 compute-0 sudo[136290]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:25 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v436: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:25 compute-0 ceph-mon[74927]: pgmap v436: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:25 compute-0 sudo[136443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhlazwowazsyffezmhdbdbcmyapwrktr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009025.1393247-146-102051136524350/AnsiballZ_setup.py'
Nov 24 18:30:25 compute-0 sudo[136443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:25 compute-0 python3.9[136445]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:30:25 compute-0 sudo[136443]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:26 compute-0 sudo[136597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snzjemjwawqzqztgnktaufenppcenvgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009025.8517218-154-237339555174268/AnsiballZ_stat.py'
Nov 24 18:30:26 compute-0 sudo[136597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:26 compute-0 python3.9[136599]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:30:26 compute-0 sudo[136597]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:26 compute-0 sudo[136749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfavgxrtseshsaqqxxnyervqyyspcyii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009026.4709628-163-104418049089484/AnsiballZ_stat.py'
Nov 24 18:30:26 compute-0 sudo[136749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:26 compute-0 python3.9[136751]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:30:26 compute-0 sudo[136749]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:27 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v437: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:27 compute-0 sudo[136901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dehlycwgkvlhudyixputylsrqerlyiyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009027.195119-173-54978445787238/AnsiballZ_command.py'
Nov 24 18:30:27 compute-0 sudo[136901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:30:27 compute-0 python3.9[136903]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:30:27 compute-0 sudo[136901]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:28 compute-0 ceph-mon[74927]: pgmap v437: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:28 compute-0 sudo[137055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyfhakgduottjuanrvbowhumzvvyntok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009027.966631-183-231688763083506/AnsiballZ_service_facts.py'
Nov 24 18:30:28 compute-0 sudo[137055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:28 compute-0 python3.9[137057]: ansible-service_facts Invoked
Nov 24 18:30:28 compute-0 network[137074]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 18:30:28 compute-0 network[137075]: 'network-scripts' will be removed from distribution in near future.
Nov 24 18:30:28 compute-0 network[137076]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 18:30:29 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v438: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:29 compute-0 ceph-mon[74927]: pgmap v438: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:31 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v439: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:31 compute-0 ceph-mon[74927]: pgmap v439: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:31 compute-0 sudo[137055]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:30:32 compute-0 sudo[137359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwsgdlavebzfffjvxsgsxkuasmlamzgg ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764009032.2547014-198-127848953900289/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764009032.2547014-198-127848953900289/args'
Nov 24 18:30:32 compute-0 sudo[137359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:32 compute-0 sudo[137359]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:33 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v440: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:33 compute-0 sudo[137526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvpdhpyzkryxnbdonrwwovauepfnwhrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009033.0176833-209-38504769775373/AnsiballZ_dnf.py'
Nov 24 18:30:33 compute-0 sudo[137526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:33 compute-0 python3.9[137528]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:30:33 compute-0 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:30:33 compute-0 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 5370 writes, 23K keys, 5370 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5370 writes, 751 syncs, 7.15 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5370 writes, 23K keys, 5370 commit groups, 1.0 writes per commit group, ingest: 18.36 MB, 0.03 MB/s
                                           Interval WAL: 5370 writes, 751 syncs, 7.15 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 18:30:34 compute-0 ceph-mon[74927]: pgmap v440: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:30:34
Nov 24 18:30:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:30:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:30:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'images', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'vms']
Nov 24 18:30:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:30:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:30:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:30:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:30:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:30:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:30:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:30:34 compute-0 sudo[137526]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:30:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:30:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:30:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:30:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:30:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:30:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:30:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:30:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:30:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:30:35 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v441: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:35 compute-0 sudo[137679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnouotaentdnweofmmmtmqfnannxjqlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009035.0369866-222-38235340589302/AnsiballZ_package_facts.py'
Nov 24 18:30:35 compute-0 sudo[137679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:35 compute-0 python3.9[137681]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 24 18:30:36 compute-0 sudo[137679]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:36 compute-0 ceph-mon[74927]: pgmap v441: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:36 compute-0 sudo[137831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpdbuavubqdsmfuxrsfcmkubdxqlinql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009036.6135273-232-175925576196753/AnsiballZ_stat.py'
Nov 24 18:30:36 compute-0 sudo[137831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:37 compute-0 python3.9[137833]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:30:37 compute-0 sudo[137831]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:37 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v442: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:37 compute-0 sudo[137909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbqxwshbymbrfelrxtvtdynczagzavla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009036.6135273-232-175925576196753/AnsiballZ_file.py'
Nov 24 18:30:37 compute-0 sudo[137909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:37 compute-0 python3.9[137911]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:30:37 compute-0 sudo[137909]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:30:38 compute-0 sudo[138061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nabghqylvufhazrzzpfuaymqzbrhhjqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009037.801392-244-80831979606866/AnsiballZ_stat.py'
Nov 24 18:30:38 compute-0 sudo[138061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:38 compute-0 ceph-mon[74927]: pgmap v442: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:38 compute-0 python3.9[138063]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:30:38 compute-0 sudo[138061]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:38 compute-0 sudo[138139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoxbedpccbmkkomoahtbnombsatvfnly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009037.801392-244-80831979606866/AnsiballZ_file.py'
Nov 24 18:30:38 compute-0 sudo[138139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:38 compute-0 python3.9[138141]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:30:38 compute-0 sudo[138139]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:39 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v443: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:39 compute-0 ceph-mon[74927]: pgmap v443: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:39 compute-0 sudo[138291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oymacttngvvmxqgoqrsdpcsfumweyobk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009039.254292-262-117008191282499/AnsiballZ_lineinfile.py'
Nov 24 18:30:39 compute-0 sudo[138291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:39 compute-0 python3.9[138293]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:30:39 compute-0 sudo[138291]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:40 compute-0 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:30:40 compute-0 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Cumulative writes: 6505 writes, 27K keys, 6505 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6505 writes, 1119 syncs, 5.81 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6505 writes, 27K keys, 6505 commit groups, 1.0 writes per commit group, ingest: 19.27 MB, 0.03 MB/s
                                           Interval WAL: 6505 writes, 1119 syncs, 5.81 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.055       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.055       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.055       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.095       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.095       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.095       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 18:30:40 compute-0 sudo[138443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axkhdvqpuzmseihiwzenvulbgpiitocd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009040.4518251-277-11325255545747/AnsiballZ_setup.py'
Nov 24 18:30:40 compute-0 sudo[138443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:41 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v444: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:41 compute-0 python3.9[138445]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 18:30:41 compute-0 sudo[138443]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:42 compute-0 sudo[138527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmcdbkrlomspvqhsimzzaixqdeycvxuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009040.4518251-277-11325255545747/AnsiballZ_systemd.py'
Nov 24 18:30:42 compute-0 sudo[138527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:42 compute-0 ceph-mon[74927]: pgmap v444: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:42 compute-0 python3.9[138529]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:30:42 compute-0 sudo[138527]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:30:42 compute-0 sshd-session[134015]: Connection closed by 192.168.122.30 port 33120
Nov 24 18:30:42 compute-0 sshd-session[134012]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:30:42 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Nov 24 18:30:42 compute-0 systemd[1]: session-40.scope: Consumed 23.279s CPU time.
Nov 24 18:30:42 compute-0 systemd-logind[822]: Session 40 logged out. Waiting for processes to exit.
Nov 24 18:30:42 compute-0 systemd-logind[822]: Removed session 40.
Nov 24 18:30:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:30:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:30:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:30:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:30:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:30:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:30:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:30:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:30:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:30:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:30:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:30:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:30:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:30:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:30:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:30:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:30:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:30:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:30:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:30:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:30:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:30:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:30:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:30:43 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v445: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:44 compute-0 ceph-mon[74927]: pgmap v445: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:45 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v446: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:45 compute-0 ceph-mon[74927]: pgmap v446: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:47 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v447: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:30:47 compute-0 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:30:47 compute-0 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 5482 writes, 23K keys, 5482 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5482 writes, 769 syncs, 7.13 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5482 writes, 23K keys, 5482 commit groups, 1.0 writes per commit group, ingest: 18.33 MB, 0.03 MB/s
                                           Interval WAL: 5482 writes, 769 syncs, 7.13 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.027       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 18:30:47 compute-0 sshd-session[138556]: Accepted publickey for zuul from 192.168.122.30 port 50110 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:30:47 compute-0 systemd-logind[822]: New session 41 of user zuul.
Nov 24 18:30:47 compute-0 systemd[1]: Started Session 41 of User zuul.
Nov 24 18:30:47 compute-0 sshd-session[138556]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:30:48 compute-0 ceph-mon[74927]: pgmap v447: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:48 compute-0 sudo[138709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iodmfycldgkbxmgumkgmfvctfpesnbqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009048.07229-22-253695710574321/AnsiballZ_file.py'
Nov 24 18:30:48 compute-0 sudo[138709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:48 compute-0 python3.9[138711]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:30:48 compute-0 sudo[138709]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:49 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v448: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:49 compute-0 sudo[138861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtpfvwihzlylnmgfzdloxrtmtvbphmfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009048.975796-34-175455583316737/AnsiballZ_stat.py'
Nov 24 18:30:49 compute-0 sudo[138861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:49 compute-0 python3.9[138863]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:30:49 compute-0 sudo[138861]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:49 compute-0 sudo[138939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxzwunfrqrbmomehpygtejwarlspwpyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009048.975796-34-175455583316737/AnsiballZ_file.py'
Nov 24 18:30:49 compute-0 sudo[138939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:50 compute-0 python3.9[138941]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:30:50 compute-0 sudo[138939]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:50 compute-0 ceph-mon[74927]: pgmap v448: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:50 compute-0 sshd-session[138559]: Connection closed by 192.168.122.30 port 50110
Nov 24 18:30:50 compute-0 sshd-session[138556]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:30:50 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Nov 24 18:30:50 compute-0 systemd[1]: session-41.scope: Consumed 1.575s CPU time.
Nov 24 18:30:50 compute-0 systemd-logind[822]: Session 41 logged out. Waiting for processes to exit.
Nov 24 18:30:50 compute-0 systemd-logind[822]: Removed session 41.
Nov 24 18:30:50 compute-0 ceph-mgr[75218]: [devicehealth INFO root] Check health
Nov 24 18:30:51 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v449: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:51 compute-0 ceph-mon[74927]: pgmap v449: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:30:53 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v450: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:54 compute-0 ceph-mon[74927]: pgmap v450: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:55 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v451: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:56 compute-0 sshd-session[138966]: Accepted publickey for zuul from 192.168.122.30 port 39984 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:30:56 compute-0 systemd-logind[822]: New session 42 of user zuul.
Nov 24 18:30:56 compute-0 systemd[1]: Started Session 42 of User zuul.
Nov 24 18:30:56 compute-0 sshd-session[138966]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:30:56 compute-0 ceph-mon[74927]: pgmap v451: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:56 compute-0 sudo[139093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:30:56 compute-0 sudo[139093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:30:56 compute-0 sudo[139093]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:56 compute-0 sudo[139145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:30:56 compute-0 sudo[139145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:30:56 compute-0 sudo[139145]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:56 compute-0 sudo[139170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:30:56 compute-0 sudo[139170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:30:56 compute-0 sudo[139170]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:56 compute-0 sudo[139195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:30:56 compute-0 sudo[139195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:30:56 compute-0 python3.9[139144]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:30:57 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v452: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:57 compute-0 ceph-mon[74927]: pgmap v452: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:57 compute-0 sudo[139195]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 24 18:30:57 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 18:30:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:30:57 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:30:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:30:57 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:30:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:30:57 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:30:57 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev d2b1027e-8718-432e-b316-1e77165b1930 does not exist
Nov 24 18:30:57 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 289e10f1-1f0c-4ab5-a5e7-3e61a4491c8e does not exist
Nov 24 18:30:57 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 8153771d-e66a-4ebb-89f5-4f1244fe6cc8 does not exist
Nov 24 18:30:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:30:57 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:30:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:30:57 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:30:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:30:57 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:30:57 compute-0 sudo[139323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:30:57 compute-0 sudo[139323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:30:57 compute-0 sudo[139323]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:57 compute-0 sudo[139356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:30:57 compute-0 sudo[139356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:30:57 compute-0 sudo[139356]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:57 compute-0 sudo[139381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:30:57 compute-0 sudo[139381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:30:57 compute-0 sudo[139381]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:30:57 compute-0 sudo[139406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:30:57 compute-0 sudo[139406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:30:57 compute-0 sudo[139529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnvvtkptcbzhvgrcocqflpulyvpkbfwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009057.3602462-33-246504200179239/AnsiballZ_file.py'
Nov 24 18:30:57 compute-0 sudo[139529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:57 compute-0 podman[139546]: 2025-11-24 18:30:57.900068328 +0000 UTC m=+0.043686211 container create adbb712220616c2a40fdd2801f46cc0f168a7107f349e84018658227ffd08e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldberg, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:30:57 compute-0 systemd[1]: Started libpod-conmon-adbb712220616c2a40fdd2801f46cc0f168a7107f349e84018658227ffd08e2e.scope.
Nov 24 18:30:57 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:30:57 compute-0 podman[139546]: 2025-11-24 18:30:57.877721401 +0000 UTC m=+0.021339314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:30:57 compute-0 podman[139546]: 2025-11-24 18:30:57.976150307 +0000 UTC m=+0.119768210 container init adbb712220616c2a40fdd2801f46cc0f168a7107f349e84018658227ffd08e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 18:30:57 compute-0 podman[139546]: 2025-11-24 18:30:57.982399013 +0000 UTC m=+0.126016896 container start adbb712220616c2a40fdd2801f46cc0f168a7107f349e84018658227ffd08e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldberg, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:30:57 compute-0 podman[139546]: 2025-11-24 18:30:57.985357017 +0000 UTC m=+0.128974920 container attach adbb712220616c2a40fdd2801f46cc0f168a7107f349e84018658227ffd08e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldberg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Nov 24 18:30:57 compute-0 nostalgic_goldberg[139562]: 167 167
Nov 24 18:30:57 compute-0 podman[139546]: 2025-11-24 18:30:57.987641124 +0000 UTC m=+0.131259007 container died adbb712220616c2a40fdd2801f46cc0f168a7107f349e84018658227ffd08e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldberg, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:30:57 compute-0 systemd[1]: libpod-adbb712220616c2a40fdd2801f46cc0f168a7107f349e84018658227ffd08e2e.scope: Deactivated successfully.
Nov 24 18:30:57 compute-0 python3.9[139533]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:30:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-90b88c6011712b774490dbb6453be89635a5ef14700ee4423e1f8b83c267c9ab-merged.mount: Deactivated successfully.
Nov 24 18:30:58 compute-0 sudo[139529]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:58 compute-0 podman[139546]: 2025-11-24 18:30:58.024816162 +0000 UTC m=+0.168434055 container remove adbb712220616c2a40fdd2801f46cc0f168a7107f349e84018658227ffd08e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldberg, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:30:58 compute-0 systemd[1]: libpod-conmon-adbb712220616c2a40fdd2801f46cc0f168a7107f349e84018658227ffd08e2e.scope: Deactivated successfully.
Nov 24 18:30:58 compute-0 podman[139610]: 2025-11-24 18:30:58.162821967 +0000 UTC m=+0.038188985 container create f35951e018b651100e1becc31164c599530e0b436f1aca3c587a1d4a7a3b7022 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 18:30:58 compute-0 systemd[1]: Started libpod-conmon-f35951e018b651100e1becc31164c599530e0b436f1aca3c587a1d4a7a3b7022.scope.
Nov 24 18:30:58 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:30:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce1d7e3f537a0ef0e58264ef8dc05bb95ac90c6c3fa15b7a4bed2d1cd196542c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:30:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce1d7e3f537a0ef0e58264ef8dc05bb95ac90c6c3fa15b7a4bed2d1cd196542c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:30:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce1d7e3f537a0ef0e58264ef8dc05bb95ac90c6c3fa15b7a4bed2d1cd196542c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:30:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce1d7e3f537a0ef0e58264ef8dc05bb95ac90c6c3fa15b7a4bed2d1cd196542c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:30:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce1d7e3f537a0ef0e58264ef8dc05bb95ac90c6c3fa15b7a4bed2d1cd196542c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:30:58 compute-0 podman[139610]: 2025-11-24 18:30:58.14854435 +0000 UTC m=+0.023911378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:30:58 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 18:30:58 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:30:58 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:30:58 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:30:58 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:30:58 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:30:58 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:30:58 compute-0 podman[139610]: 2025-11-24 18:30:58.725218664 +0000 UTC m=+0.600585702 container init f35951e018b651100e1becc31164c599530e0b436f1aca3c587a1d4a7a3b7022 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 18:30:58 compute-0 podman[139610]: 2025-11-24 18:30:58.735756387 +0000 UTC m=+0.611123395 container start f35951e018b651100e1becc31164c599530e0b436f1aca3c587a1d4a7a3b7022 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:30:58 compute-0 podman[139610]: 2025-11-24 18:30:58.738509735 +0000 UTC m=+0.613876743 container attach f35951e018b651100e1becc31164c599530e0b436f1aca3c587a1d4a7a3b7022 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jones, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 18:30:59 compute-0 sudo[139779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxzjcwzlngembipjsxtholdtkandfvpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009058.1749141-41-120774907242428/AnsiballZ_stat.py'
Nov 24 18:30:59 compute-0 sudo[139779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:59 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v453: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:59 compute-0 python3.9[139781]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:30:59 compute-0 sudo[139779]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:59 compute-0 sudo[139869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iheiqmqywkykmzslnausjuwhmofcuxzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009058.1749141-41-120774907242428/AnsiballZ_file.py'
Nov 24 18:30:59 compute-0 sudo[139869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:30:59 compute-0 ceph-mon[74927]: pgmap v453: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:30:59 compute-0 python3.9[139872]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.i74xqfct recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:30:59 compute-0 sudo[139869]: pam_unix(sudo:session): session closed for user root
Nov 24 18:30:59 compute-0 focused_jones[139666]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:30:59 compute-0 focused_jones[139666]: --> relative data size: 1.0
Nov 24 18:30:59 compute-0 focused_jones[139666]: --> All data devices are unavailable
Nov 24 18:30:59 compute-0 systemd[1]: libpod-f35951e018b651100e1becc31164c599530e0b436f1aca3c587a1d4a7a3b7022.scope: Deactivated successfully.
Nov 24 18:30:59 compute-0 systemd[1]: libpod-f35951e018b651100e1becc31164c599530e0b436f1aca3c587a1d4a7a3b7022.scope: Consumed 1.078s CPU time.
Nov 24 18:30:59 compute-0 podman[139610]: 2025-11-24 18:30:59.870298144 +0000 UTC m=+1.745665152 container died f35951e018b651100e1becc31164c599530e0b436f1aca3c587a1d4a7a3b7022 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 18:30:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce1d7e3f537a0ef0e58264ef8dc05bb95ac90c6c3fa15b7a4bed2d1cd196542c-merged.mount: Deactivated successfully.
Nov 24 18:30:59 compute-0 podman[139610]: 2025-11-24 18:30:59.931982174 +0000 UTC m=+1.807349182 container remove f35951e018b651100e1becc31164c599530e0b436f1aca3c587a1d4a7a3b7022 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:30:59 compute-0 systemd[1]: libpod-conmon-f35951e018b651100e1becc31164c599530e0b436f1aca3c587a1d4a7a3b7022.scope: Deactivated successfully.
Nov 24 18:30:59 compute-0 sudo[139406]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:00 compute-0 sudo[139920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:31:00 compute-0 sudo[139920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:31:00 compute-0 sudo[139920]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:00 compute-0 sudo[139945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:31:00 compute-0 sudo[139945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:31:00 compute-0 sudo[139945]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:00 compute-0 sudo[139970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:31:00 compute-0 sudo[139970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:31:00 compute-0 sudo[139970]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:00 compute-0 sudo[140022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:31:00 compute-0 sudo[140022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:31:00 compute-0 sudo[140164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txakmndcaqcgmwwxljoaqxgyeqsonrqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009060.1526875-61-44099978221399/AnsiballZ_stat.py'
Nov 24 18:31:00 compute-0 sudo[140164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:00 compute-0 podman[140187]: 2025-11-24 18:31:00.557019745 +0000 UTC m=+0.043917037 container create a17a5c225cb80a5b2f66d5551e5fbcc4fd7461df4f85cd47d7797b19211a245d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wing, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:31:00 compute-0 systemd[1]: Started libpod-conmon-a17a5c225cb80a5b2f66d5551e5fbcc4fd7461df4f85cd47d7797b19211a245d.scope.
Nov 24 18:31:00 compute-0 python3.9[140172]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:31:00 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:31:00 compute-0 podman[140187]: 2025-11-24 18:31:00.536453152 +0000 UTC m=+0.023350474 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:31:00 compute-0 podman[140187]: 2025-11-24 18:31:00.644586171 +0000 UTC m=+0.131483503 container init a17a5c225cb80a5b2f66d5551e5fbcc4fd7461df4f85cd47d7797b19211a245d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 18:31:00 compute-0 podman[140187]: 2025-11-24 18:31:00.652364825 +0000 UTC m=+0.139262117 container start a17a5c225cb80a5b2f66d5551e5fbcc4fd7461df4f85cd47d7797b19211a245d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wing, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:31:00 compute-0 sudo[140164]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:00 compute-0 podman[140187]: 2025-11-24 18:31:00.656865917 +0000 UTC m=+0.143763209 container attach a17a5c225cb80a5b2f66d5551e5fbcc4fd7461df4f85cd47d7797b19211a245d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 18:31:00 compute-0 distracted_wing[140204]: 167 167
Nov 24 18:31:00 compute-0 systemd[1]: libpod-a17a5c225cb80a5b2f66d5551e5fbcc4fd7461df4f85cd47d7797b19211a245d.scope: Deactivated successfully.
Nov 24 18:31:00 compute-0 podman[140187]: 2025-11-24 18:31:00.660725534 +0000 UTC m=+0.147622826 container died a17a5c225cb80a5b2f66d5551e5fbcc4fd7461df4f85cd47d7797b19211a245d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wing, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:31:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-37686dea258588001ae4378d5ac6c5c49d563ae711dbff5d8db09f949af1994e-merged.mount: Deactivated successfully.
Nov 24 18:31:00 compute-0 podman[140187]: 2025-11-24 18:31:00.696079026 +0000 UTC m=+0.182976308 container remove a17a5c225cb80a5b2f66d5551e5fbcc4fd7461df4f85cd47d7797b19211a245d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 18:31:00 compute-0 systemd[1]: libpod-conmon-a17a5c225cb80a5b2f66d5551e5fbcc4fd7461df4f85cd47d7797b19211a245d.scope: Deactivated successfully.
Nov 24 18:31:00 compute-0 podman[140251]: 2025-11-24 18:31:00.898154398 +0000 UTC m=+0.065436404 container create 4bb1a1a5937cde129c18dbcb6f38645e218251903d7d1667444739e716ceedad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hoover, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:31:00 compute-0 systemd[1]: Started libpod-conmon-4bb1a1a5937cde129c18dbcb6f38645e218251903d7d1667444739e716ceedad.scope.
Nov 24 18:31:00 compute-0 podman[140251]: 2025-11-24 18:31:00.865798081 +0000 UTC m=+0.033080147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:31:00 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:31:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d262533aff7538113ede6d5bf0122027e74f5ec89768910bdab2c5ff6e41b6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:31:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d262533aff7538113ede6d5bf0122027e74f5ec89768910bdab2c5ff6e41b6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:31:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d262533aff7538113ede6d5bf0122027e74f5ec89768910bdab2c5ff6e41b6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:31:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d262533aff7538113ede6d5bf0122027e74f5ec89768910bdab2c5ff6e41b6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:31:01 compute-0 podman[140251]: 2025-11-24 18:31:01.015368144 +0000 UTC m=+0.182650140 container init 4bb1a1a5937cde129c18dbcb6f38645e218251903d7d1667444739e716ceedad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hoover, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 18:31:01 compute-0 podman[140251]: 2025-11-24 18:31:01.028188564 +0000 UTC m=+0.195470570 container start 4bb1a1a5937cde129c18dbcb6f38645e218251903d7d1667444739e716ceedad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hoover, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:31:01 compute-0 podman[140251]: 2025-11-24 18:31:01.032663385 +0000 UTC m=+0.199945441 container attach 4bb1a1a5937cde129c18dbcb6f38645e218251903d7d1667444739e716ceedad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 18:31:01 compute-0 sudo[140322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umeowwoxafcjkxtwklpkpfurhdrkfxez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009060.1526875-61-44099978221399/AnsiballZ_file.py'
Nov 24 18:31:01 compute-0 sudo[140322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:01 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v454: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:01 compute-0 python3.9[140325]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=._98jmdwk recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:31:01 compute-0 sudo[140322]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:01 compute-0 sudo[140479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-youvuvjstpdcyefvhqfpjjlcvczdpmyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009061.510742-74-253651779629095/AnsiballZ_file.py'
Nov 24 18:31:01 compute-0 sudo[140479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:01 compute-0 frosty_hoover[140292]: {
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:     "0": [
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:         {
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "devices": [
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "/dev/loop3"
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             ],
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "lv_name": "ceph_lv0",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "lv_size": "21470642176",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "name": "ceph_lv0",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "tags": {
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.cluster_name": "ceph",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.crush_device_class": "",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.encrypted": "0",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.osd_id": "0",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.type": "block",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.vdo": "0"
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             },
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "type": "block",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "vg_name": "ceph_vg0"
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:         }
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:     ],
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:     "1": [
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:         {
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "devices": [
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "/dev/loop4"
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             ],
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "lv_name": "ceph_lv1",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "lv_size": "21470642176",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "name": "ceph_lv1",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "tags": {
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.cluster_name": "ceph",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.crush_device_class": "",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.encrypted": "0",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.osd_id": "1",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.type": "block",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.vdo": "0"
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             },
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "type": "block",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "vg_name": "ceph_vg1"
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:         }
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:     ],
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:     "2": [
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:         {
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "devices": [
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "/dev/loop5"
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             ],
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "lv_name": "ceph_lv2",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "lv_size": "21470642176",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "name": "ceph_lv2",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "tags": {
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.cluster_name": "ceph",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.crush_device_class": "",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.encrypted": "0",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.osd_id": "2",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.type": "block",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:                 "ceph.vdo": "0"
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             },
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "type": "block",
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:             "vg_name": "ceph_vg2"
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:         }
Nov 24 18:31:01 compute-0 frosty_hoover[140292]:     ]
Nov 24 18:31:01 compute-0 frosty_hoover[140292]: }
Nov 24 18:31:01 compute-0 systemd[1]: libpod-4bb1a1a5937cde129c18dbcb6f38645e218251903d7d1667444739e716ceedad.scope: Deactivated successfully.
Nov 24 18:31:01 compute-0 podman[140251]: 2025-11-24 18:31:01.833234828 +0000 UTC m=+1.000516794 container died 4bb1a1a5937cde129c18dbcb6f38645e218251903d7d1667444739e716ceedad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hoover, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 18:31:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d262533aff7538113ede6d5bf0122027e74f5ec89768910bdab2c5ff6e41b6d-merged.mount: Deactivated successfully.
Nov 24 18:31:01 compute-0 podman[140251]: 2025-11-24 18:31:01.89979843 +0000 UTC m=+1.067080416 container remove 4bb1a1a5937cde129c18dbcb6f38645e218251903d7d1667444739e716ceedad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hoover, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:31:01 compute-0 systemd[1]: libpod-conmon-4bb1a1a5937cde129c18dbcb6f38645e218251903d7d1667444739e716ceedad.scope: Deactivated successfully.
Nov 24 18:31:01 compute-0 sudo[140022]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:01 compute-0 sudo[140493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:31:01 compute-0 python3.9[140481]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:31:01 compute-0 sudo[140493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:31:02 compute-0 sudo[140493]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:02 compute-0 sudo[140479]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:02 compute-0 sudo[140518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:31:02 compute-0 sudo[140518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:31:02 compute-0 sudo[140518]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:02 compute-0 sudo[140564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:31:02 compute-0 sudo[140564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:31:02 compute-0 sudo[140564]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:02 compute-0 sudo[140592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:31:02 compute-0 sudo[140592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:31:02 compute-0 ceph-mon[74927]: pgmap v454: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:02 compute-0 sudo[140781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brprsiaojolrpuscjrhqfnwwzauqjylc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009062.18263-82-20911189463283/AnsiballZ_stat.py'
Nov 24 18:31:02 compute-0 sudo[140781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:02 compute-0 podman[140783]: 2025-11-24 18:31:02.538206145 +0000 UTC m=+0.038434591 container create 83ec3da4050d90080e5d154dd89a1e0cb9f4fe62c7f7ea51ec67cd23c0202b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 18:31:02 compute-0 systemd[1]: Started libpod-conmon-83ec3da4050d90080e5d154dd89a1e0cb9f4fe62c7f7ea51ec67cd23c0202b39.scope.
Nov 24 18:31:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:31:02 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:31:02 compute-0 podman[140783]: 2025-11-24 18:31:02.603395162 +0000 UTC m=+0.103623658 container init 83ec3da4050d90080e5d154dd89a1e0cb9f4fe62c7f7ea51ec67cd23c0202b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cohen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 18:31:02 compute-0 podman[140783]: 2025-11-24 18:31:02.611056713 +0000 UTC m=+0.111285169 container start 83ec3da4050d90080e5d154dd89a1e0cb9f4fe62c7f7ea51ec67cd23c0202b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cohen, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 18:31:02 compute-0 podman[140783]: 2025-11-24 18:31:02.614303514 +0000 UTC m=+0.114531960 container attach 83ec3da4050d90080e5d154dd89a1e0cb9f4fe62c7f7ea51ec67cd23c0202b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cohen, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 18:31:02 compute-0 condescending_cohen[140802]: 167 167
Nov 24 18:31:02 compute-0 podman[140783]: 2025-11-24 18:31:02.616887298 +0000 UTC m=+0.117115744 container died 83ec3da4050d90080e5d154dd89a1e0cb9f4fe62c7f7ea51ec67cd23c0202b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cohen, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 18:31:02 compute-0 systemd[1]: libpod-83ec3da4050d90080e5d154dd89a1e0cb9f4fe62c7f7ea51ec67cd23c0202b39.scope: Deactivated successfully.
Nov 24 18:31:02 compute-0 podman[140783]: 2025-11-24 18:31:02.524070702 +0000 UTC m=+0.024299148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:31:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-556f34b91357e37d3ab2601bf39ea4f366557d39c408173cafb9f472636f6554-merged.mount: Deactivated successfully.
Nov 24 18:31:02 compute-0 podman[140783]: 2025-11-24 18:31:02.652326143 +0000 UTC m=+0.152554609 container remove 83ec3da4050d90080e5d154dd89a1e0cb9f4fe62c7f7ea51ec67cd23c0202b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cohen, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:31:02 compute-0 systemd[1]: libpod-conmon-83ec3da4050d90080e5d154dd89a1e0cb9f4fe62c7f7ea51ec67cd23c0202b39.scope: Deactivated successfully.
Nov 24 18:31:02 compute-0 python3.9[140785]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:31:02 compute-0 sudo[140781]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:02 compute-0 podman[140851]: 2025-11-24 18:31:02.843070134 +0000 UTC m=+0.060890601 container create 80af0449cd79304e415a2e4ad542304ce45fce7b71dd99c7b1ad7e5ff282886f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_elgamal, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:31:02 compute-0 systemd[1]: Started libpod-conmon-80af0449cd79304e415a2e4ad542304ce45fce7b71dd99c7b1ad7e5ff282886f.scope.
Nov 24 18:31:02 compute-0 podman[140851]: 2025-11-24 18:31:02.821475945 +0000 UTC m=+0.039296452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:31:02 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:31:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/966f9ac3c9f237f8af43dcb4ab0086daf738a0fa7dbee13ee2d4089392322108/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:31:02 compute-0 sudo[140921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smxllxheygqcszececynjxbwkqgvrbej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009062.18263-82-20911189463283/AnsiballZ_file.py'
Nov 24 18:31:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/966f9ac3c9f237f8af43dcb4ab0086daf738a0fa7dbee13ee2d4089392322108/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:31:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/966f9ac3c9f237f8af43dcb4ab0086daf738a0fa7dbee13ee2d4089392322108/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:31:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/966f9ac3c9f237f8af43dcb4ab0086daf738a0fa7dbee13ee2d4089392322108/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:31:02 compute-0 sudo[140921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:02 compute-0 podman[140851]: 2025-11-24 18:31:02.939256235 +0000 UTC m=+0.157076732 container init 80af0449cd79304e415a2e4ad542304ce45fce7b71dd99c7b1ad7e5ff282886f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_elgamal, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:31:02 compute-0 podman[140851]: 2025-11-24 18:31:02.947101791 +0000 UTC m=+0.164922248 container start 80af0449cd79304e415a2e4ad542304ce45fce7b71dd99c7b1ad7e5ff282886f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 18:31:02 compute-0 podman[140851]: 2025-11-24 18:31:02.949994483 +0000 UTC m=+0.167814940 container attach 80af0449cd79304e415a2e4ad542304ce45fce7b71dd99c7b1ad7e5ff282886f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 18:31:03 compute-0 python3.9[140923]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:31:03 compute-0 sudo[140921]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:03 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v455: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:03 compute-0 sudo[141077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jozikcqymeelushwpogdkxgcpgipjwiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009063.2847528-82-87461850494409/AnsiballZ_stat.py'
Nov 24 18:31:03 compute-0 sudo[141077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:03 compute-0 python3.9[141081]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:31:03 compute-0 sudo[141077]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:03 compute-0 upbeat_elgamal[140912]: {
Nov 24 18:31:03 compute-0 upbeat_elgamal[140912]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:31:03 compute-0 upbeat_elgamal[140912]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:31:03 compute-0 upbeat_elgamal[140912]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:31:03 compute-0 upbeat_elgamal[140912]:         "osd_id": 0,
Nov 24 18:31:03 compute-0 upbeat_elgamal[140912]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:31:03 compute-0 upbeat_elgamal[140912]:         "type": "bluestore"
Nov 24 18:31:03 compute-0 upbeat_elgamal[140912]:     },
Nov 24 18:31:03 compute-0 upbeat_elgamal[140912]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:31:03 compute-0 upbeat_elgamal[140912]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:31:03 compute-0 upbeat_elgamal[140912]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:31:03 compute-0 upbeat_elgamal[140912]:         "osd_id": 1,
Nov 24 18:31:03 compute-0 upbeat_elgamal[140912]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:31:03 compute-0 upbeat_elgamal[140912]:         "type": "bluestore"
Nov 24 18:31:03 compute-0 upbeat_elgamal[140912]:     },
Nov 24 18:31:03 compute-0 upbeat_elgamal[140912]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:31:03 compute-0 upbeat_elgamal[140912]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:31:03 compute-0 upbeat_elgamal[140912]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:31:03 compute-0 upbeat_elgamal[140912]:         "osd_id": 2,
Nov 24 18:31:03 compute-0 upbeat_elgamal[140912]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:31:03 compute-0 upbeat_elgamal[140912]:         "type": "bluestore"
Nov 24 18:31:03 compute-0 upbeat_elgamal[140912]:     }
Nov 24 18:31:03 compute-0 upbeat_elgamal[140912]: }
Nov 24 18:31:03 compute-0 systemd[1]: libpod-80af0449cd79304e415a2e4ad542304ce45fce7b71dd99c7b1ad7e5ff282886f.scope: Deactivated successfully.
Nov 24 18:31:03 compute-0 podman[140851]: 2025-11-24 18:31:03.888238191 +0000 UTC m=+1.106058648 container died 80af0449cd79304e415a2e4ad542304ce45fce7b71dd99c7b1ad7e5ff282886f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 18:31:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-966f9ac3c9f237f8af43dcb4ab0086daf738a0fa7dbee13ee2d4089392322108-merged.mount: Deactivated successfully.
Nov 24 18:31:04 compute-0 sudo[141195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucfsnebugkhqtrikzoyofsbfzinmobzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009063.2847528-82-87461850494409/AnsiballZ_file.py'
Nov 24 18:31:04 compute-0 sudo[141195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:04 compute-0 podman[140851]: 2025-11-24 18:31:04.053579318 +0000 UTC m=+1.271399765 container remove 80af0449cd79304e415a2e4ad542304ce45fce7b71dd99c7b1ad7e5ff282886f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:31:04 compute-0 systemd[1]: libpod-conmon-80af0449cd79304e415a2e4ad542304ce45fce7b71dd99c7b1ad7e5ff282886f.scope: Deactivated successfully.
Nov 24 18:31:04 compute-0 sudo[140592]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:04 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:31:04 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:31:04 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:31:04 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:31:04 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 3350fb55-2a0b-44ee-8516-716208011328 does not exist
Nov 24 18:31:04 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 53e0ba9e-374f-4eb9-9dd4-152504a62ee9 does not exist
Nov 24 18:31:04 compute-0 sudo[141198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:31:04 compute-0 sudo[141198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:31:04 compute-0 sudo[141198]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:04 compute-0 sudo[141223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:31:04 compute-0 sudo[141223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:31:04 compute-0 sudo[141223]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:04 compute-0 python3.9[141197]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:31:04 compute-0 sudo[141195]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:04 compute-0 ceph-mon[74927]: pgmap v455: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:04 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:31:04 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:31:04 compute-0 sudo[141397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urlpuisidarlxlwqeynhbtfxcmsbsubg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009064.3871996-105-210273870408602/AnsiballZ_file.py'
Nov 24 18:31:04 compute-0 sudo[141397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:31:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:31:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:31:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:31:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:31:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:31:04 compute-0 python3.9[141399]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:31:04 compute-0 sudo[141397]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:05 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v456: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:05 compute-0 ceph-mon[74927]: pgmap v456: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:05 compute-0 sudo[141549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unhjgawdfvierrkfeaquldnninefjwfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009065.2115161-113-278335096800047/AnsiballZ_stat.py'
Nov 24 18:31:05 compute-0 sudo[141549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:05 compute-0 python3.9[141551]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:31:05 compute-0 sudo[141549]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:05 compute-0 sudo[141627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckxzvhcdrdmyyiqujbcokksdvlwdgxfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009065.2115161-113-278335096800047/AnsiballZ_file.py'
Nov 24 18:31:05 compute-0 sudo[141627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:06 compute-0 python3.9[141629]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:31:06 compute-0 sudo[141627]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:06 compute-0 sudo[141779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvbyvvqtomppyojwzkigdzyejyzlrkzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009066.332754-125-198888247826759/AnsiballZ_stat.py'
Nov 24 18:31:06 compute-0 sudo[141779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:06 compute-0 python3.9[141781]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:31:06 compute-0 sudo[141779]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:07 compute-0 sudo[141857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuzytppotjouhkuavaleaxmtafunviey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009066.332754-125-198888247826759/AnsiballZ_file.py'
Nov 24 18:31:07 compute-0 sudo[141857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:07 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v457: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:07 compute-0 python3.9[141859]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:31:07 compute-0 sudo[141857]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:31:07 compute-0 sudo[142009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzsmwkennebmfytdcqnewkuecmcddpiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009067.3932328-137-155922451191354/AnsiballZ_systemd.py'
Nov 24 18:31:07 compute-0 sudo[142009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:08 compute-0 ceph-mon[74927]: pgmap v457: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:08 compute-0 python3.9[142011]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:31:08 compute-0 systemd[1]: Reloading.
Nov 24 18:31:08 compute-0 systemd-rc-local-generator[142034]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:31:08 compute-0 systemd-sysv-generator[142039]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:31:08 compute-0 sudo[142009]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:09 compute-0 sudo[142197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srfljfpavnlmbauesbctuuvgdoikfkny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009068.7743747-145-132489445111527/AnsiballZ_stat.py'
Nov 24 18:31:09 compute-0 sudo[142197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:09 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v458: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:09 compute-0 python3.9[142199]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:31:09 compute-0 sudo[142197]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:09 compute-0 sudo[142275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tplxklisqvnxkkanvqbehfqagzixgveh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009068.7743747-145-132489445111527/AnsiballZ_file.py'
Nov 24 18:31:09 compute-0 sudo[142275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:09 compute-0 python3.9[142277]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:31:09 compute-0 sudo[142275]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:10 compute-0 sudo[142427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nchovjamcrvyblroasgiaxrsanqhrekf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009069.9040027-157-205635182031240/AnsiballZ_stat.py'
Nov 24 18:31:10 compute-0 sudo[142427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:10 compute-0 ceph-mon[74927]: pgmap v458: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:10 compute-0 python3.9[142429]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:31:10 compute-0 sudo[142427]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:10 compute-0 sudo[142505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfbfhsqbducockgsirnwyxvcmalsuoer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009069.9040027-157-205635182031240/AnsiballZ_file.py'
Nov 24 18:31:10 compute-0 sudo[142505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:10 compute-0 python3.9[142507]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:31:10 compute-0 sudo[142505]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:11 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v459: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:11 compute-0 sudo[142657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdeyaswayblpvdpqcemzwbkztrpusyvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009070.995165-169-166419679330128/AnsiballZ_systemd.py'
Nov 24 18:31:11 compute-0 sudo[142657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:11 compute-0 ceph-mon[74927]: pgmap v459: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:11 compute-0 python3.9[142659]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:31:11 compute-0 systemd[1]: Reloading.
Nov 24 18:31:11 compute-0 systemd-sysv-generator[142687]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:31:11 compute-0 systemd-rc-local-generator[142682]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:31:11 compute-0 systemd[1]: Starting Create netns directory...
Nov 24 18:31:11 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 18:31:11 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 18:31:11 compute-0 systemd[1]: Finished Create netns directory.
Nov 24 18:31:12 compute-0 sudo[142657]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:31:12 compute-0 python3.9[142851]: ansible-ansible.builtin.service_facts Invoked
Nov 24 18:31:12 compute-0 network[142868]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 18:31:12 compute-0 network[142869]: 'network-scripts' will be removed from distribution in near future.
Nov 24 18:31:12 compute-0 network[142870]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 18:31:13 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v460: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:14 compute-0 ceph-mon[74927]: pgmap v460: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:15 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v461: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:16 compute-0 ceph-mon[74927]: pgmap v461: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:16 compute-0 sudo[143130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkybhiabhqumthsbjmmdfnkjvytczpaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009075.9538534-195-146247416011052/AnsiballZ_stat.py'
Nov 24 18:31:16 compute-0 sudo[143130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:16 compute-0 python3.9[143132]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:31:16 compute-0 sudo[143130]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:16 compute-0 sudo[143208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbwpxzcnusrmehpfvzicrpdciywxuxoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009075.9538534-195-146247416011052/AnsiballZ_file.py'
Nov 24 18:31:16 compute-0 sudo[143208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:16 compute-0 python3.9[143210]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:31:16 compute-0 sudo[143208]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:17 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v462: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:17 compute-0 ceph-mon[74927]: pgmap v462: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:17 compute-0 sudo[143360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnymrubspquyjhdicuqafxqlmqldqliz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009077.1955285-208-136027961907686/AnsiballZ_file.py'
Nov 24 18:31:17 compute-0 sudo[143360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:31:17 compute-0 python3.9[143362]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:31:17 compute-0 sudo[143360]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:18 compute-0 sudo[143512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sijevgujzjnrberalihlgmzvgliskart ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009077.8719428-216-138523963763535/AnsiballZ_stat.py'
Nov 24 18:31:18 compute-0 sudo[143512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:18 compute-0 python3.9[143514]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:31:18 compute-0 sudo[143512]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:18 compute-0 sudo[143590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezfbwepshtpmaqxpbbonlmccwzfbbrne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009077.8719428-216-138523963763535/AnsiballZ_file.py'
Nov 24 18:31:18 compute-0 sudo[143590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:18 compute-0 python3.9[143592]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:31:18 compute-0 sudo[143590]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:19 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v463: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:19 compute-0 sudo[143742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdfhxhuqvwizmlgnkxjpnyfuuetjombf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009078.987069-231-65517958997991/AnsiballZ_timezone.py'
Nov 24 18:31:19 compute-0 sudo[143742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:19 compute-0 python3.9[143744]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 24 18:31:19 compute-0 systemd[1]: Starting Time & Date Service...
Nov 24 18:31:19 compute-0 systemd[1]: Started Time & Date Service.
Nov 24 18:31:19 compute-0 sudo[143742]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:20 compute-0 ceph-mon[74927]: pgmap v463: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:20 compute-0 sudo[143898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxdmzztdmbrczwuofexgdlplroezpduc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009079.9794242-240-56967372643738/AnsiballZ_file.py'
Nov 24 18:31:20 compute-0 sudo[143898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:20 compute-0 python3.9[143900]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:31:20 compute-0 sudo[143898]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:21 compute-0 sudo[144050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwsadzcvbmpgycouzhvnjybhnxdkldov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009080.7151606-248-165780137841636/AnsiballZ_stat.py'
Nov 24 18:31:21 compute-0 sudo[144050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:21 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v464: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:21 compute-0 python3.9[144052]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:31:21 compute-0 ceph-mon[74927]: pgmap v464: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:21 compute-0 sudo[144050]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:21 compute-0 sudo[144128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adtamttuafpzrugudxztvkmsrquaybvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009080.7151606-248-165780137841636/AnsiballZ_file.py'
Nov 24 18:31:21 compute-0 sudo[144128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:21 compute-0 python3.9[144130]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:31:21 compute-0 sudo[144128]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:22 compute-0 sudo[144280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhrudkucdjsxrazlowfwgjjdkvismuop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009081.8433695-260-20151790715285/AnsiballZ_stat.py'
Nov 24 18:31:22 compute-0 sudo[144280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:22 compute-0 python3.9[144282]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:31:22 compute-0 sudo[144280]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:22 compute-0 sudo[144358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjopiisgcsfbamvavefyuyzxluqglwbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009081.8433695-260-20151790715285/AnsiballZ_file.py'
Nov 24 18:31:22 compute-0 sudo[144358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:31:22 compute-0 python3.9[144360]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ggb8ahrg recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:31:22 compute-0 sudo[144358]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:23 compute-0 sudo[144510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixkyqdpjzuhpthcmdovmlrsorpiqgeoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009082.835067-272-178878846125476/AnsiballZ_stat.py'
Nov 24 18:31:23 compute-0 sudo[144510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:23 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v465: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:23 compute-0 python3.9[144512]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:31:23 compute-0 sudo[144510]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:23 compute-0 sudo[144588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvcwczinsrzkjdehrbvvtpfcmvhokzqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009082.835067-272-178878846125476/AnsiballZ_file.py'
Nov 24 18:31:23 compute-0 sudo[144588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:23 compute-0 python3.9[144590]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:31:23 compute-0 sudo[144588]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:24 compute-0 ceph-mon[74927]: pgmap v465: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:24 compute-0 sudo[144740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxfqmpewchttjmwfuueahrggsqmziddc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009084.0707324-285-16110133531381/AnsiballZ_command.py'
Nov 24 18:31:24 compute-0 sudo[144740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:24 compute-0 python3.9[144742]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:31:24 compute-0 sudo[144740]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:25 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v466: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:25 compute-0 ceph-mon[74927]: pgmap v466: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:25 compute-0 sudo[144893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnddwyheuqzsjceraktyxagtnibphavp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764009084.9806004-293-214739773674832/AnsiballZ_edpm_nftables_from_files.py'
Nov 24 18:31:25 compute-0 sudo[144893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:25 compute-0 python3[144895]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 24 18:31:25 compute-0 sudo[144893]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:26 compute-0 sudo[145045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyezqajmrdowcwvgjsxoxlmfemmdbiwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009085.9388232-301-146058077656410/AnsiballZ_stat.py'
Nov 24 18:31:26 compute-0 sudo[145045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:26 compute-0 python3.9[145047]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:31:26 compute-0 sudo[145045]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:26 compute-0 sudo[145123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dslvgmgwnylfqsenixxvsesmzxbmrqdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009085.9388232-301-146058077656410/AnsiballZ_file.py'
Nov 24 18:31:26 compute-0 sudo[145123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:26 compute-0 python3.9[145125]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:31:26 compute-0 sudo[145123]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:27 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v467: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:27 compute-0 sudo[145275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvcevivaxxhagipqbcpqdakbvcnfqdxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009087.052338-313-351644870439/AnsiballZ_stat.py'
Nov 24 18:31:27 compute-0 sudo[145275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:27 compute-0 python3.9[145277]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:31:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:31:27 compute-0 sudo[145275]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:27 compute-0 sudo[145353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esgbcrsmhglciskfzzgnfdhlklnhsita ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009087.052338-313-351644870439/AnsiballZ_file.py'
Nov 24 18:31:27 compute-0 sudo[145353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:27 compute-0 python3.9[145355]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:31:28 compute-0 sudo[145353]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:28 compute-0 ceph-mon[74927]: pgmap v467: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:28 compute-0 sudo[145505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkhkvnhbcqzednnosneotihoogowtyfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009088.1859798-325-149702365328804/AnsiballZ_stat.py'
Nov 24 18:31:28 compute-0 sudo[145505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:28 compute-0 python3.9[145507]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:31:28 compute-0 sudo[145505]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:28 compute-0 sudo[145583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzzukcjunkoespjbytunhabaqgiscdso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009088.1859798-325-149702365328804/AnsiballZ_file.py'
Nov 24 18:31:28 compute-0 sudo[145583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:29 compute-0 python3.9[145585]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:31:29 compute-0 sudo[145583]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:29 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v468: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:29 compute-0 ceph-mon[74927]: pgmap v468: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:29 compute-0 sudo[145735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuzsxbrofcrienlbpdjgpnxcexqelmhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009089.2764528-337-153511934359257/AnsiballZ_stat.py'
Nov 24 18:31:29 compute-0 sudo[145735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:29 compute-0 python3.9[145737]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:31:29 compute-0 sudo[145735]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:30 compute-0 sudo[145813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oirokpkvvglctiolcerxidyttdxkyhlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009089.2764528-337-153511934359257/AnsiballZ_file.py'
Nov 24 18:31:30 compute-0 sudo[145813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:30 compute-0 python3.9[145815]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:31:30 compute-0 sudo[145813]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:30 compute-0 sudo[145965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfmergjoaayunsaofghgzifqvdzxncbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009090.5317087-349-258080607361428/AnsiballZ_stat.py'
Nov 24 18:31:30 compute-0 sudo[145965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:31 compute-0 python3.9[145967]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:31:31 compute-0 sudo[145965]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:31 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v469: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:31 compute-0 sudo[146043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsebrzsnrthkjsvvfjzxbndigtgbcpvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009090.5317087-349-258080607361428/AnsiballZ_file.py'
Nov 24 18:31:31 compute-0 sudo[146043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:31 compute-0 python3.9[146045]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:31:31 compute-0 sudo[146043]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:32 compute-0 sudo[146195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwejqbuegiplvrlqudnpicazgcsbzqrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009091.8331158-362-105014404730276/AnsiballZ_command.py'
Nov 24 18:31:32 compute-0 sudo[146195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:32 compute-0 ceph-mon[74927]: pgmap v469: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:32 compute-0 python3.9[146197]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:31:32 compute-0 sudo[146195]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:31:33 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v470: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:33 compute-0 sudo[146350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxgmqjegogcijbtsxmnyfhfoiqlvmpbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009092.6700447-370-128394072939183/AnsiballZ_blockinfile.py'
Nov 24 18:31:33 compute-0 sudo[146350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:33 compute-0 ceph-mon[74927]: pgmap v470: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:33 compute-0 python3.9[146352]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:31:33 compute-0 sudo[146350]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:33 compute-0 sudo[146502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmezwpxwhhssaabwinvprvoyvavhldwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009093.693477-379-221365052318036/AnsiballZ_file.py'
Nov 24 18:31:33 compute-0 sudo[146502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:34 compute-0 python3.9[146504]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:31:34 compute-0 sudo[146502]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:34 compute-0 sudo[146654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toazhcqpryoscgqvcjtbwnlzhcloeoos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009094.2581697-379-146126109061752/AnsiballZ_file.py'
Nov 24 18:31:34 compute-0 sudo[146654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:31:34
Nov 24 18:31:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:31:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:31:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['vms', '.mgr', 'volumes', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log']
Nov 24 18:31:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:31:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:31:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:31:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:31:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:31:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:31:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:31:34 compute-0 python3.9[146656]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:31:34 compute-0 sudo[146654]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:31:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:31:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:31:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:31:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:31:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:31:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:31:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:31:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:31:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:31:35 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v471: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:35 compute-0 sudo[146806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esmoelbizoceqzwgxlqjszgzkftfwbfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009094.9000878-394-267475181878984/AnsiballZ_mount.py'
Nov 24 18:31:35 compute-0 sudo[146806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:35 compute-0 python3.9[146808]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 24 18:31:35 compute-0 sudo[146806]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:35 compute-0 sudo[146958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afvigfrxzltwxnhhvwxsxocidlgzgull ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009095.751454-394-163281575614260/AnsiballZ_mount.py'
Nov 24 18:31:35 compute-0 sudo[146958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:36 compute-0 python3.9[146960]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 24 18:31:36 compute-0 sudo[146958]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:36 compute-0 ceph-mon[74927]: pgmap v471: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:36 compute-0 sshd-session[138969]: Connection closed by 192.168.122.30 port 39984
Nov 24 18:31:36 compute-0 sshd-session[138966]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:31:36 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Nov 24 18:31:36 compute-0 systemd[1]: session-42.scope: Consumed 28.884s CPU time.
Nov 24 18:31:36 compute-0 systemd-logind[822]: Session 42 logged out. Waiting for processes to exit.
Nov 24 18:31:36 compute-0 systemd-logind[822]: Removed session 42.
Nov 24 18:31:37 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v472: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:37 compute-0 ceph-mon[74927]: pgmap v472: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:31:39 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v473: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:40 compute-0 ceph-mon[74927]: pgmap v473: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:41 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v474: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:41 compute-0 ceph-mon[74927]: pgmap v474: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:42 compute-0 sshd-session[146985]: Accepted publickey for zuul from 192.168.122.30 port 46250 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:31:42 compute-0 systemd-logind[822]: New session 43 of user zuul.
Nov 24 18:31:42 compute-0 systemd[1]: Started Session 43 of User zuul.
Nov 24 18:31:42 compute-0 sshd-session[146985]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:31:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:31:42 compute-0 sudo[147138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpqfrtkaxtnnpuuektkoxocytmkupefr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009102.2649353-16-260817480536448/AnsiballZ_tempfile.py'
Nov 24 18:31:42 compute-0 sudo[147138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:43 compute-0 python3.9[147140]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 24 18:31:43 compute-0 sudo[147138]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:31:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:31:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:31:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:31:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:31:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:31:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:31:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:31:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:31:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:31:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:31:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:31:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:31:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:31:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:31:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:31:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:31:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:31:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:31:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:31:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:31:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:31:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:31:43 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v475: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:43 compute-0 sudo[147290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbglukbunesrldoglraivasdsmocxdok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009103.160279-28-125558002726198/AnsiballZ_stat.py'
Nov 24 18:31:43 compute-0 sudo[147290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:43 compute-0 python3.9[147292]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:31:43 compute-0 sudo[147290]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:44 compute-0 ceph-mon[74927]: pgmap v475: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:44 compute-0 sudo[147445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlrrggtbgqrwsjeyenluyzcrtbwcoxbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009103.9384415-36-62826932393343/AnsiballZ_slurp.py'
Nov 24 18:31:44 compute-0 sudo[147445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:44 compute-0 python3.9[147447]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 24 18:31:44 compute-0 sudo[147445]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:44 compute-0 sudo[147597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axtpbyoipecfbclclirvgninynicwclo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009104.7344763-44-148762557414672/AnsiballZ_stat.py'
Nov 24 18:31:44 compute-0 sudo[147597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:45 compute-0 python3.9[147599]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.3up6pgcf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:31:45 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v476: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:45 compute-0 sudo[147597]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:45 compute-0 ceph-mon[74927]: pgmap v476: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:45 compute-0 sudo[147722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgbvrskgnhxgazdwxuxoyavklnuwmlvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009104.7344763-44-148762557414672/AnsiballZ_copy.py'
Nov 24 18:31:45 compute-0 sudo[147722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:45 compute-0 python3.9[147724]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.3up6pgcf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764009104.7344763-44-148762557414672/.source.3up6pgcf _original_basename=.h7unejvv follow=False checksum=c8681bd5f60cfe8e414de701936dcfa8bc77df8f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:31:45 compute-0 sudo[147722]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:46 compute-0 sudo[147874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivuiiqjbpczrudzcmsoiigknazzjybro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009106.0964808-59-61817844892161/AnsiballZ_setup.py'
Nov 24 18:31:46 compute-0 sudo[147874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:46 compute-0 python3.9[147876]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:31:46 compute-0 sudo[147874]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:47 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v477: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:47 compute-0 ceph-mon[74927]: pgmap v477: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:31:47 compute-0 sudo[148026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubbkpljrmthbgghxapfukktbajddrgdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009107.2566876-68-164567062281906/AnsiballZ_blockinfile.py'
Nov 24 18:31:47 compute-0 sudo[148026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:47 compute-0 python3.9[148028]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhS8frVtJkphIV3qjYEBaOrfFAUD1SVRr7LLCHE4Oz5qMeQHKYm90YB9nO7ntC/BIXenfYoTm6fYVn1JaiGoGSQdRBXPQG/o6WD6Ec3pD/Mcl/KMJGYuMHxaEizMQ3wOpo20hOTbEsu6v2y+3ETjeAG0UF9fWh/vCDy6bX0hMh8o7mf9skIV8gvWuCbJo4Vk92qBh7z9qccV5j5J5maU9c28+VEF1nlN0GSyYT/IRFdD7gDE7QFZ9QpapaWGSFE7nCTgz4Mw4nnJ+KaxvkxxHf4knCpDxk59+uk/+9G8oUiFokkDbJiPI6sZS+BALztR/CzJpNrAYaYmhzjbSRYb51wPj5EnXYzqgik4JzhmsqsepLD79RGK2b4ZWnQVP7WFOUL+Wm4+MkbF0LVmcy1XJeA5yhmhodU+fpO1t1SZRONc1eqep1NVqxMOHXOQgKGpIAg95Vpx9szp5NhOkzp1cQTeEhxfog0RyENmd9NxKBpu3NmtFN+dETuLT2Co1JMhM=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA2lZlyCN0FJ/jD1EDSdkabXa5aE54G6xn7+v3fPL+BD
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHFHJ7xweyewLWbij/U6h4iEFO2zmE+OAqJetXAaVahyXo6KOKB5z+dQ1ItOa9RPE9AAjyAVton3sCrkTSjqY88=
                                              create=True mode=0644 path=/tmp/ansible.3up6pgcf state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:31:47 compute-0 sudo[148026]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:48 compute-0 sudo[148178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqtmroebyglxopxmttjqncdzwdckjqlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009108.1582859-76-125327098189768/AnsiballZ_command.py'
Nov 24 18:31:48 compute-0 sudo[148178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:48 compute-0 python3.9[148180]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.3up6pgcf' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:31:48 compute-0 sudo[148178]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:49 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v478: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:49 compute-0 sudo[148332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khenxkoqsgiimyoilhvhcwnctkvooctb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009109.072116-84-35748438574631/AnsiballZ_file.py'
Nov 24 18:31:49 compute-0 sudo[148332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:49 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 24 18:31:49 compute-0 python3.9[148334]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.3up6pgcf state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:31:49 compute-0 sudo[148332]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:50 compute-0 sshd-session[146988]: Connection closed by 192.168.122.30 port 46250
Nov 24 18:31:50 compute-0 sshd-session[146985]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:31:50 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Nov 24 18:31:50 compute-0 systemd[1]: session-43.scope: Consumed 5.044s CPU time.
Nov 24 18:31:50 compute-0 systemd-logind[822]: Session 43 logged out. Waiting for processes to exit.
Nov 24 18:31:50 compute-0 ceph-mon[74927]: pgmap v478: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:50 compute-0 systemd-logind[822]: Removed session 43.
Nov 24 18:31:51 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v479: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:51 compute-0 ceph-mon[74927]: pgmap v479: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:31:53 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v480: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:53 compute-0 ceph-mon[74927]: pgmap v480: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:55 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v481: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:55 compute-0 ceph-mon[74927]: pgmap v481: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:55 compute-0 sshd-session[148361]: Accepted publickey for zuul from 192.168.122.30 port 39942 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:31:55 compute-0 systemd-logind[822]: New session 44 of user zuul.
Nov 24 18:31:55 compute-0 systemd[1]: Started Session 44 of User zuul.
Nov 24 18:31:55 compute-0 sshd-session[148361]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:31:56 compute-0 python3.9[148514]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:31:57 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v482: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:57 compute-0 ceph-mon[74927]: pgmap v482: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:31:57 compute-0 sudo[148668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xouiejeykfbeveqmoulwxijmpavvidti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009117.0205338-32-115822115878599/AnsiballZ_systemd.py'
Nov 24 18:31:57 compute-0 sudo[148668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:57 compute-0 python3.9[148670]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 24 18:31:57 compute-0 sudo[148668]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:58 compute-0 sudo[148822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvsbnbmgkykalfpeavusoqjzjobrzzmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009118.135988-40-255243892859735/AnsiballZ_systemd.py'
Nov 24 18:31:58 compute-0 sudo[148822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:58 compute-0 python3.9[148824]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 18:31:58 compute-0 sudo[148822]: pam_unix(sudo:session): session closed for user root
Nov 24 18:31:59 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v483: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:59 compute-0 ceph-mon[74927]: pgmap v483: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:31:59 compute-0 sudo[148975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viblbhghtfqaojawlmkkbxfdewohexqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009119.0342596-49-60912169565279/AnsiballZ_command.py'
Nov 24 18:31:59 compute-0 sudo[148975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:31:59 compute-0 python3.9[148977]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:31:59 compute-0 sudo[148975]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:00 compute-0 sudo[149128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocrjdtekhtyvwfbvypwsjkufwvcrpojv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009119.8639107-57-207141747205576/AnsiballZ_stat.py'
Nov 24 18:32:00 compute-0 sudo[149128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:00 compute-0 python3.9[149130]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:32:00 compute-0 sudo[149128]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:01 compute-0 sudo[149280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnezwsvlluzcnssmsttsphfrnehckjrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009120.708264-66-159234693587424/AnsiballZ_file.py'
Nov 24 18:32:01 compute-0 sudo[149280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:01 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v484: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:01 compute-0 python3.9[149282]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:32:01 compute-0 ceph-mon[74927]: pgmap v484: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:01 compute-0 sudo[149280]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:01 compute-0 sshd-session[148364]: Connection closed by 192.168.122.30 port 39942
Nov 24 18:32:01 compute-0 sshd-session[148361]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:32:01 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Nov 24 18:32:01 compute-0 systemd[1]: session-44.scope: Consumed 3.748s CPU time.
Nov 24 18:32:01 compute-0 systemd-logind[822]: Session 44 logged out. Waiting for processes to exit.
Nov 24 18:32:01 compute-0 systemd-logind[822]: Removed session 44.
Nov 24 18:32:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:32:03 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v485: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:03 compute-0 ceph-mon[74927]: pgmap v485: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:04 compute-0 sudo[149307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:32:04 compute-0 sudo[149307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:32:04 compute-0 sudo[149307]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:04 compute-0 sudo[149332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:32:04 compute-0 sudo[149332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:32:04 compute-0 sudo[149332]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:04 compute-0 sudo[149357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:32:04 compute-0 sudo[149357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:32:04 compute-0 sudo[149357]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:04 compute-0 sudo[149382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 24 18:32:04 compute-0 sudo[149382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:32:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:32:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:32:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:32:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:32:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:32:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:32:04 compute-0 podman[149479]: 2025-11-24 18:32:04.895259674 +0000 UTC m=+0.067768704 container exec 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 18:32:04 compute-0 podman[149479]: 2025-11-24 18:32:04.980891708 +0000 UTC m=+0.153400718 container exec_died 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 18:32:05 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v486: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:32:05 compute-0 ceph-mon[74927]: pgmap v486: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:32:05 compute-0 sudo[149382]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:05 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:32:05 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:32:05 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:32:05 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:32:05 compute-0 sudo[149637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:32:05 compute-0 sudo[149637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:32:05 compute-0 sudo[149637]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:05 compute-0 sudo[149662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:32:05 compute-0 sudo[149662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:32:05 compute-0 sudo[149662]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:05 compute-0 sudo[149687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:32:05 compute-0 sudo[149687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:32:05 compute-0 sudo[149687]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:05 compute-0 sudo[149712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:32:05 compute-0 sudo[149712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:32:06 compute-0 sudo[149712]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:32:06 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:32:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:32:06 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:32:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:32:06 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:32:06 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 65954c29-d63d-4cea-b757-9e87f020651d does not exist
Nov 24 18:32:06 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev f8f4b38e-dade-4c4b-be85-4bfeaaf07911 does not exist
Nov 24 18:32:06 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev a454fb02-3cf5-4a11-b222-fc95716b94a3 does not exist
Nov 24 18:32:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:32:06 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:32:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:32:06 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:32:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:32:06 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:32:06 compute-0 sudo[149769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:32:06 compute-0 sudo[149769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:32:06 compute-0 sudo[149769]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:06 compute-0 sudo[149794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:32:06 compute-0 sudo[149794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:32:06 compute-0 sudo[149794]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:06 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:32:06 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:32:06 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:32:06 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:32:06 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:32:06 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:32:06 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:32:06 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:32:06 compute-0 sudo[149819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:32:06 compute-0 sudo[149819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:32:06 compute-0 sudo[149819]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:06 compute-0 sudo[149845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:32:06 compute-0 sudo[149845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:32:06 compute-0 sshd-session[149844]: Accepted publickey for zuul from 192.168.122.30 port 33894 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:32:06 compute-0 systemd-logind[822]: New session 45 of user zuul.
Nov 24 18:32:06 compute-0 systemd[1]: Started Session 45 of User zuul.
Nov 24 18:32:06 compute-0 sshd-session[149844]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:32:06 compute-0 podman[149965]: 2025-11-24 18:32:06.894203959 +0000 UTC m=+0.042389032 container create e3a101a9ff9114ae5567d6ff4a04d04ecc2660bd6018c04001b380dc1ae03976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sinoussi, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 18:32:06 compute-0 systemd[1]: Started libpod-conmon-e3a101a9ff9114ae5567d6ff4a04d04ecc2660bd6018c04001b380dc1ae03976.scope.
Nov 24 18:32:06 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:32:06 compute-0 podman[149965]: 2025-11-24 18:32:06.873500566 +0000 UTC m=+0.021685709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:32:06 compute-0 podman[149965]: 2025-11-24 18:32:06.977596515 +0000 UTC m=+0.125781608 container init e3a101a9ff9114ae5567d6ff4a04d04ecc2660bd6018c04001b380dc1ae03976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sinoussi, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 18:32:06 compute-0 podman[149965]: 2025-11-24 18:32:06.984303607 +0000 UTC m=+0.132488680 container start e3a101a9ff9114ae5567d6ff4a04d04ecc2660bd6018c04001b380dc1ae03976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sinoussi, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:32:06 compute-0 podman[149965]: 2025-11-24 18:32:06.987418998 +0000 UTC m=+0.135604121 container attach e3a101a9ff9114ae5567d6ff4a04d04ecc2660bd6018c04001b380dc1ae03976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:32:06 compute-0 bold_sinoussi[149982]: 167 167
Nov 24 18:32:06 compute-0 systemd[1]: libpod-e3a101a9ff9114ae5567d6ff4a04d04ecc2660bd6018c04001b380dc1ae03976.scope: Deactivated successfully.
Nov 24 18:32:06 compute-0 conmon[149982]: conmon e3a101a9ff9114ae5567 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e3a101a9ff9114ae5567d6ff4a04d04ecc2660bd6018c04001b380dc1ae03976.scope/container/memory.events
Nov 24 18:32:06 compute-0 podman[149965]: 2025-11-24 18:32:06.99061569 +0000 UTC m=+0.138800773 container died e3a101a9ff9114ae5567d6ff4a04d04ecc2660bd6018c04001b380dc1ae03976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:32:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-bff906e1ca594529c2f5f1bc8ff083c531caa535da868a9861c55aba79b210f8-merged.mount: Deactivated successfully.
Nov 24 18:32:07 compute-0 podman[149965]: 2025-11-24 18:32:07.033075022 +0000 UTC m=+0.181260105 container remove e3a101a9ff9114ae5567d6ff4a04d04ecc2660bd6018c04001b380dc1ae03976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:32:07 compute-0 systemd[1]: libpod-conmon-e3a101a9ff9114ae5567d6ff4a04d04ecc2660bd6018c04001b380dc1ae03976.scope: Deactivated successfully.
Nov 24 18:32:07 compute-0 podman[150007]: 2025-11-24 18:32:07.182733193 +0000 UTC m=+0.042965076 container create 3713f05f57bff28f136b987b49727054cfa2efd9d668dcc95c975648d59e3f50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_saha, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 18:32:07 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v487: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:32:07 compute-0 systemd[1]: Started libpod-conmon-3713f05f57bff28f136b987b49727054cfa2efd9d668dcc95c975648d59e3f50.scope.
Nov 24 18:32:07 compute-0 podman[150007]: 2025-11-24 18:32:07.162466472 +0000 UTC m=+0.022698435 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:32:07 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:32:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a5891960ed5ccfc0240d643f1c8bc2988b4f016e19472c935b8f147a38c5dd2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:32:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a5891960ed5ccfc0240d643f1c8bc2988b4f016e19472c935b8f147a38c5dd2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:32:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a5891960ed5ccfc0240d643f1c8bc2988b4f016e19472c935b8f147a38c5dd2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:32:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a5891960ed5ccfc0240d643f1c8bc2988b4f016e19472c935b8f147a38c5dd2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:32:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a5891960ed5ccfc0240d643f1c8bc2988b4f016e19472c935b8f147a38c5dd2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:32:07 compute-0 podman[150007]: 2025-11-24 18:32:07.27588945 +0000 UTC m=+0.136121333 container init 3713f05f57bff28f136b987b49727054cfa2efd9d668dcc95c975648d59e3f50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 24 18:32:07 compute-0 podman[150007]: 2025-11-24 18:32:07.284731728 +0000 UTC m=+0.144963591 container start 3713f05f57bff28f136b987b49727054cfa2efd9d668dcc95c975648d59e3f50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_saha, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:32:07 compute-0 podman[150007]: 2025-11-24 18:32:07.287892469 +0000 UTC m=+0.148124352 container attach 3713f05f57bff28f136b987b49727054cfa2efd9d668dcc95c975648d59e3f50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_saha, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:32:07 compute-0 ceph-mon[74927]: pgmap v487: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:32:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:32:07 compute-0 python3.9[150125]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:32:08 compute-0 distracted_saha[150041]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:32:08 compute-0 distracted_saha[150041]: --> relative data size: 1.0
Nov 24 18:32:08 compute-0 distracted_saha[150041]: --> All data devices are unavailable
Nov 24 18:32:08 compute-0 systemd[1]: libpod-3713f05f57bff28f136b987b49727054cfa2efd9d668dcc95c975648d59e3f50.scope: Deactivated successfully.
Nov 24 18:32:08 compute-0 systemd[1]: libpod-3713f05f57bff28f136b987b49727054cfa2efd9d668dcc95c975648d59e3f50.scope: Consumed 1.009s CPU time.
Nov 24 18:32:08 compute-0 podman[150007]: 2025-11-24 18:32:08.360253562 +0000 UTC m=+1.220485445 container died 3713f05f57bff28f136b987b49727054cfa2efd9d668dcc95c975648d59e3f50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 18:32:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a5891960ed5ccfc0240d643f1c8bc2988b4f016e19472c935b8f147a38c5dd2-merged.mount: Deactivated successfully.
Nov 24 18:32:08 compute-0 podman[150007]: 2025-11-24 18:32:08.411258085 +0000 UTC m=+1.271489958 container remove 3713f05f57bff28f136b987b49727054cfa2efd9d668dcc95c975648d59e3f50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_saha, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 24 18:32:08 compute-0 systemd[1]: libpod-conmon-3713f05f57bff28f136b987b49727054cfa2efd9d668dcc95c975648d59e3f50.scope: Deactivated successfully.
Nov 24 18:32:08 compute-0 sudo[149845]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:08 compute-0 sudo[150289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:32:08 compute-0 sudo[150337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpgxviclwqhyhgupwaybksvujsnlzeqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009128.2033029-34-205816826575382/AnsiballZ_setup.py'
Nov 24 18:32:08 compute-0 sudo[150289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:32:08 compute-0 sudo[150337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:08 compute-0 sudo[150289]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:08 compute-0 sudo[150342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:32:08 compute-0 sudo[150342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:32:08 compute-0 sudo[150342]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:08 compute-0 sudo[150367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:32:08 compute-0 sudo[150367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:32:08 compute-0 sudo[150367]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:08 compute-0 sudo[150392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:32:08 compute-0 sudo[150392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:32:08 compute-0 python3.9[150341]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 18:32:08 compute-0 podman[150462]: 2025-11-24 18:32:08.962371265 +0000 UTC m=+0.034124649 container create 51c99b9ecce86ecf7e8334eed4b4bb0bf8d9c3b0cf9fee63ebd5f91b869bf978 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 18:32:08 compute-0 systemd[1]: Started libpod-conmon-51c99b9ecce86ecf7e8334eed4b4bb0bf8d9c3b0cf9fee63ebd5f91b869bf978.scope.
Nov 24 18:32:09 compute-0 sudo[150337]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:09 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:32:09 compute-0 podman[150462]: 2025-11-24 18:32:09.03135907 +0000 UTC m=+0.103112474 container init 51c99b9ecce86ecf7e8334eed4b4bb0bf8d9c3b0cf9fee63ebd5f91b869bf978 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_banzai, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 18:32:09 compute-0 podman[150462]: 2025-11-24 18:32:09.036957784 +0000 UTC m=+0.108711168 container start 51c99b9ecce86ecf7e8334eed4b4bb0bf8d9c3b0cf9fee63ebd5f91b869bf978 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_banzai, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 18:32:09 compute-0 podman[150462]: 2025-11-24 18:32:09.03953498 +0000 UTC m=+0.111288384 container attach 51c99b9ecce86ecf7e8334eed4b4bb0bf8d9c3b0cf9fee63ebd5f91b869bf978 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_banzai, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:32:09 compute-0 upbeat_banzai[150481]: 167 167
Nov 24 18:32:09 compute-0 systemd[1]: libpod-51c99b9ecce86ecf7e8334eed4b4bb0bf8d9c3b0cf9fee63ebd5f91b869bf978.scope: Deactivated successfully.
Nov 24 18:32:09 compute-0 podman[150462]: 2025-11-24 18:32:09.040784952 +0000 UTC m=+0.112538336 container died 51c99b9ecce86ecf7e8334eed4b4bb0bf8d9c3b0cf9fee63ebd5f91b869bf978 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_banzai, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:32:09 compute-0 podman[150462]: 2025-11-24 18:32:08.948465707 +0000 UTC m=+0.020219121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:32:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-30a093f55d6e4f2269e043db37f214476b9df1d83af476eda3a3ac260bb632ea-merged.mount: Deactivated successfully.
Nov 24 18:32:09 compute-0 podman[150462]: 2025-11-24 18:32:09.073163366 +0000 UTC m=+0.144916740 container remove 51c99b9ecce86ecf7e8334eed4b4bb0bf8d9c3b0cf9fee63ebd5f91b869bf978 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 18:32:09 compute-0 systemd[1]: libpod-conmon-51c99b9ecce86ecf7e8334eed4b4bb0bf8d9c3b0cf9fee63ebd5f91b869bf978.scope: Deactivated successfully.
Nov 24 18:32:09 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v488: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:32:09 compute-0 podman[150506]: 2025-11-24 18:32:09.213365663 +0000 UTC m=+0.034777006 container create e016556bf05c3f74b7a9a18ff6a20080a3eb359be381a7607c3b57ef0467fdca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:32:09 compute-0 systemd[1]: Started libpod-conmon-e016556bf05c3f74b7a9a18ff6a20080a3eb359be381a7607c3b57ef0467fdca.scope.
Nov 24 18:32:09 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:32:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/729504113eb1c468da14c3d70930e90fe48d1f274b842d0bd495a5c290bfef91/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:32:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/729504113eb1c468da14c3d70930e90fe48d1f274b842d0bd495a5c290bfef91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:32:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/729504113eb1c468da14c3d70930e90fe48d1f274b842d0bd495a5c290bfef91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:32:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/729504113eb1c468da14c3d70930e90fe48d1f274b842d0bd495a5c290bfef91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:32:09 compute-0 podman[150506]: 2025-11-24 18:32:09.278788047 +0000 UTC m=+0.100199410 container init e016556bf05c3f74b7a9a18ff6a20080a3eb359be381a7607c3b57ef0467fdca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khorana, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:32:09 compute-0 podman[150506]: 2025-11-24 18:32:09.284827442 +0000 UTC m=+0.106238785 container start e016556bf05c3f74b7a9a18ff6a20080a3eb359be381a7607c3b57ef0467fdca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khorana, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 18:32:09 compute-0 podman[150506]: 2025-11-24 18:32:09.288286001 +0000 UTC m=+0.109697344 container attach e016556bf05c3f74b7a9a18ff6a20080a3eb359be381a7607c3b57ef0467fdca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:32:09 compute-0 podman[150506]: 2025-11-24 18:32:09.198245444 +0000 UTC m=+0.019656817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:32:09 compute-0 ceph-mon[74927]: pgmap v488: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:32:09 compute-0 sudo[150600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vflniyqxrivpyvxbgtepsuzirnpgjudi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009128.2033029-34-205816826575382/AnsiballZ_dnf.py'
Nov 24 18:32:09 compute-0 sudo[150600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:09 compute-0 python3.9[150602]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 18:32:09 compute-0 exciting_khorana[150522]: {
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:     "0": [
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:         {
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "devices": [
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "/dev/loop3"
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             ],
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "lv_name": "ceph_lv0",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "lv_size": "21470642176",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "name": "ceph_lv0",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "tags": {
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.cluster_name": "ceph",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.crush_device_class": "",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.encrypted": "0",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.osd_id": "0",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.type": "block",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.vdo": "0"
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             },
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "type": "block",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "vg_name": "ceph_vg0"
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:         }
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:     ],
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:     "1": [
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:         {
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "devices": [
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "/dev/loop4"
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             ],
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "lv_name": "ceph_lv1",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "lv_size": "21470642176",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "name": "ceph_lv1",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "tags": {
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.cluster_name": "ceph",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.crush_device_class": "",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.encrypted": "0",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.osd_id": "1",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.type": "block",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.vdo": "0"
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             },
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "type": "block",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "vg_name": "ceph_vg1"
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:         }
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:     ],
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:     "2": [
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:         {
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "devices": [
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "/dev/loop5"
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             ],
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "lv_name": "ceph_lv2",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "lv_size": "21470642176",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "name": "ceph_lv2",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "tags": {
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.cluster_name": "ceph",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.crush_device_class": "",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.encrypted": "0",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.osd_id": "2",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.type": "block",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:                 "ceph.vdo": "0"
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             },
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "type": "block",
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:             "vg_name": "ceph_vg2"
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:         }
Nov 24 18:32:09 compute-0 exciting_khorana[150522]:     ]
Nov 24 18:32:09 compute-0 exciting_khorana[150522]: }
Nov 24 18:32:10 compute-0 systemd[1]: libpod-e016556bf05c3f74b7a9a18ff6a20080a3eb359be381a7607c3b57ef0467fdca.scope: Deactivated successfully.
Nov 24 18:32:10 compute-0 conmon[150522]: conmon e016556bf05c3f74b7a9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e016556bf05c3f74b7a9a18ff6a20080a3eb359be381a7607c3b57ef0467fdca.scope/container/memory.events
Nov 24 18:32:10 compute-0 podman[150506]: 2025-11-24 18:32:10.02433164 +0000 UTC m=+0.845742983 container died e016556bf05c3f74b7a9a18ff6a20080a3eb359be381a7607c3b57ef0467fdca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:32:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-729504113eb1c468da14c3d70930e90fe48d1f274b842d0bd495a5c290bfef91-merged.mount: Deactivated successfully.
Nov 24 18:32:10 compute-0 podman[150506]: 2025-11-24 18:32:10.126836678 +0000 UTC m=+0.948248031 container remove e016556bf05c3f74b7a9a18ff6a20080a3eb359be381a7607c3b57ef0467fdca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khorana, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 18:32:10 compute-0 systemd[1]: libpod-conmon-e016556bf05c3f74b7a9a18ff6a20080a3eb359be381a7607c3b57ef0467fdca.scope: Deactivated successfully.
Nov 24 18:32:10 compute-0 sudo[150392]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:10 compute-0 sudo[150618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:32:10 compute-0 sudo[150618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:32:10 compute-0 sudo[150618]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:10 compute-0 sudo[150643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:32:10 compute-0 sudo[150643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:32:10 compute-0 sudo[150643]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:10 compute-0 sudo[150668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:32:10 compute-0 sudo[150668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:32:10 compute-0 sudo[150668]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:10 compute-0 sudo[150693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:32:10 compute-0 sudo[150693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:32:10 compute-0 podman[150759]: 2025-11-24 18:32:10.754766455 +0000 UTC m=+0.046266421 container create 6a6924c446872f1b238103b5b44e260d8d913cc242a7b42fcdaac5c1db614fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 24 18:32:10 compute-0 systemd[1]: Started libpod-conmon-6a6924c446872f1b238103b5b44e260d8d913cc242a7b42fcdaac5c1db614fb9.scope.
Nov 24 18:32:10 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:32:10 compute-0 podman[150759]: 2025-11-24 18:32:10.732628036 +0000 UTC m=+0.024128002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:32:10 compute-0 podman[150759]: 2025-11-24 18:32:10.834587619 +0000 UTC m=+0.126087575 container init 6a6924c446872f1b238103b5b44e260d8d913cc242a7b42fcdaac5c1db614fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 18:32:10 compute-0 podman[150759]: 2025-11-24 18:32:10.84276814 +0000 UTC m=+0.134268096 container start 6a6924c446872f1b238103b5b44e260d8d913cc242a7b42fcdaac5c1db614fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hopper, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 18:32:10 compute-0 podman[150759]: 2025-11-24 18:32:10.846718941 +0000 UTC m=+0.138218917 container attach 6a6924c446872f1b238103b5b44e260d8d913cc242a7b42fcdaac5c1db614fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hopper, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 18:32:10 compute-0 bold_hopper[150776]: 167 167
Nov 24 18:32:10 compute-0 systemd[1]: libpod-6a6924c446872f1b238103b5b44e260d8d913cc242a7b42fcdaac5c1db614fb9.scope: Deactivated successfully.
Nov 24 18:32:10 compute-0 podman[150759]: 2025-11-24 18:32:10.849215186 +0000 UTC m=+0.140715162 container died 6a6924c446872f1b238103b5b44e260d8d913cc242a7b42fcdaac5c1db614fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 18:32:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-43e474309d5feea4339de344308dc1df45bd266d4b874e3cd055203d364d3b22-merged.mount: Deactivated successfully.
Nov 24 18:32:10 compute-0 podman[150759]: 2025-11-24 18:32:10.906099259 +0000 UTC m=+0.197599215 container remove 6a6924c446872f1b238103b5b44e260d8d913cc242a7b42fcdaac5c1db614fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 18:32:10 compute-0 systemd[1]: libpod-conmon-6a6924c446872f1b238103b5b44e260d8d913cc242a7b42fcdaac5c1db614fb9.scope: Deactivated successfully.
Nov 24 18:32:11 compute-0 podman[150800]: 2025-11-24 18:32:11.06896739 +0000 UTC m=+0.042189086 container create 61706b493dbc806153bfd46a44f6ed2d0b211ee5d1ee7e8f1e468127ee4adb23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_borg, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 24 18:32:11 compute-0 sudo[150600]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:11 compute-0 systemd[1]: Started libpod-conmon-61706b493dbc806153bfd46a44f6ed2d0b211ee5d1ee7e8f1e468127ee4adb23.scope.
Nov 24 18:32:11 compute-0 podman[150800]: 2025-11-24 18:32:11.053229475 +0000 UTC m=+0.026451181 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:32:11 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:32:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1824eef510f4d3e822fb61fc9033349f9dc2e75e29c4fadcef238f10d19b6974/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:32:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1824eef510f4d3e822fb61fc9033349f9dc2e75e29c4fadcef238f10d19b6974/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:32:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1824eef510f4d3e822fb61fc9033349f9dc2e75e29c4fadcef238f10d19b6974/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:32:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1824eef510f4d3e822fb61fc9033349f9dc2e75e29c4fadcef238f10d19b6974/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:32:11 compute-0 podman[150800]: 2025-11-24 18:32:11.166643344 +0000 UTC m=+0.139865070 container init 61706b493dbc806153bfd46a44f6ed2d0b211ee5d1ee7e8f1e468127ee4adb23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 18:32:11 compute-0 podman[150800]: 2025-11-24 18:32:11.173342296 +0000 UTC m=+0.146563982 container start 61706b493dbc806153bfd46a44f6ed2d0b211ee5d1ee7e8f1e468127ee4adb23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_borg, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:32:11 compute-0 podman[150800]: 2025-11-24 18:32:11.176309292 +0000 UTC m=+0.149530968 container attach 61706b493dbc806153bfd46a44f6ed2d0b211ee5d1ee7e8f1e468127ee4adb23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 18:32:11 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v489: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:32:11 compute-0 ceph-mon[74927]: pgmap v489: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:32:11 compute-0 python3.9[150971]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:32:12 compute-0 busy_borg[150821]: {
Nov 24 18:32:12 compute-0 busy_borg[150821]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:32:12 compute-0 busy_borg[150821]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:32:12 compute-0 busy_borg[150821]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:32:12 compute-0 busy_borg[150821]:         "osd_id": 0,
Nov 24 18:32:12 compute-0 busy_borg[150821]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:32:12 compute-0 busy_borg[150821]:         "type": "bluestore"
Nov 24 18:32:12 compute-0 busy_borg[150821]:     },
Nov 24 18:32:12 compute-0 busy_borg[150821]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:32:12 compute-0 busy_borg[150821]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:32:12 compute-0 busy_borg[150821]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:32:12 compute-0 busy_borg[150821]:         "osd_id": 1,
Nov 24 18:32:12 compute-0 busy_borg[150821]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:32:12 compute-0 busy_borg[150821]:         "type": "bluestore"
Nov 24 18:32:12 compute-0 busy_borg[150821]:     },
Nov 24 18:32:12 compute-0 busy_borg[150821]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:32:12 compute-0 busy_borg[150821]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:32:12 compute-0 busy_borg[150821]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:32:12 compute-0 busy_borg[150821]:         "osd_id": 2,
Nov 24 18:32:12 compute-0 busy_borg[150821]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:32:12 compute-0 busy_borg[150821]:         "type": "bluestore"
Nov 24 18:32:12 compute-0 busy_borg[150821]:     }
Nov 24 18:32:12 compute-0 busy_borg[150821]: }
Nov 24 18:32:12 compute-0 systemd[1]: libpod-61706b493dbc806153bfd46a44f6ed2d0b211ee5d1ee7e8f1e468127ee4adb23.scope: Deactivated successfully.
Nov 24 18:32:12 compute-0 podman[151001]: 2025-11-24 18:32:12.14703536 +0000 UTC m=+0.020585410 container died 61706b493dbc806153bfd46a44f6ed2d0b211ee5d1ee7e8f1e468127ee4adb23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_borg, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:32:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-1824eef510f4d3e822fb61fc9033349f9dc2e75e29c4fadcef238f10d19b6974-merged.mount: Deactivated successfully.
Nov 24 18:32:12 compute-0 podman[151001]: 2025-11-24 18:32:12.19986618 +0000 UTC m=+0.073416240 container remove 61706b493dbc806153bfd46a44f6ed2d0b211ee5d1ee7e8f1e468127ee4adb23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_borg, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 18:32:12 compute-0 systemd[1]: libpod-conmon-61706b493dbc806153bfd46a44f6ed2d0b211ee5d1ee7e8f1e468127ee4adb23.scope: Deactivated successfully.
Nov 24 18:32:12 compute-0 sudo[150693]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:32:12 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:32:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:32:12 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:32:12 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 151f78d7-dfb8-461e-95aa-0968b06a48f7 does not exist
Nov 24 18:32:12 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev f9841239-336b-49d5-9056-563aa15af362 does not exist
Nov 24 18:32:12 compute-0 sudo[151016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:32:12 compute-0 sudo[151016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:32:12 compute-0 sudo[151016]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:12 compute-0 sudo[151065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:32:12 compute-0 sudo[151065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:32:12 compute-0 sudo[151065]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:32:13 compute-0 python3.9[151215]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 18:32:13 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v490: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:32:13 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:32:13 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:32:13 compute-0 python3.9[151365]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:32:14 compute-0 ceph-mon[74927]: pgmap v490: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:32:14 compute-0 python3.9[151515]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:32:14 compute-0 sshd-session[149872]: Connection closed by 192.168.122.30 port 33894
Nov 24 18:32:14 compute-0 sshd-session[149844]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:32:14 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Nov 24 18:32:14 compute-0 systemd[1]: session-45.scope: Consumed 5.645s CPU time.
Nov 24 18:32:14 compute-0 systemd-logind[822]: Session 45 logged out. Waiting for processes to exit.
Nov 24 18:32:14 compute-0 systemd-logind[822]: Removed session 45.
Nov 24 18:32:15 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v491: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:32:15 compute-0 ceph-mon[74927]: pgmap v491: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:32:17 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v492: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:17 compute-0 ceph-mon[74927]: pgmap v492: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:32:19 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v493: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:19 compute-0 ceph-mon[74927]: pgmap v493: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:20 compute-0 sshd-session[151540]: Accepted publickey for zuul from 192.168.122.30 port 38264 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:32:20 compute-0 systemd-logind[822]: New session 46 of user zuul.
Nov 24 18:32:20 compute-0 systemd[1]: Started Session 46 of User zuul.
Nov 24 18:32:20 compute-0 sshd-session[151540]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:32:21 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v494: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:21 compute-0 ceph-mon[74927]: pgmap v494: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:21 compute-0 python3.9[151693]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:32:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:32:22 compute-0 sudo[151847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpflzhddpreactzjxlzmjlyyykjadati ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009142.5064614-50-126677134947541/AnsiballZ_file.py'
Nov 24 18:32:22 compute-0 sudo[151847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:23 compute-0 python3.9[151849]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:32:23 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v495: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:23 compute-0 sudo[151847]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:23 compute-0 ceph-mon[74927]: pgmap v495: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:23 compute-0 sudo[151999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmzurdnhbrmhbqfeeawpirwkjelavmex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009143.3980045-50-71447090060086/AnsiballZ_file.py'
Nov 24 18:32:23 compute-0 sudo[151999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:23 compute-0 python3.9[152001]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:32:23 compute-0 sudo[151999]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:24 compute-0 sudo[152151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crouvbpinnudebmweftnkawrvsqieadx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009144.1494124-65-121912473418158/AnsiballZ_stat.py'
Nov 24 18:32:24 compute-0 sudo[152151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:24 compute-0 python3.9[152153]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:32:24 compute-0 sudo[152151]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:25 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v496: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:25 compute-0 ceph-mon[74927]: pgmap v496: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:25 compute-0 sudo[152274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpqrrlklmwpodhqijjeyfxgfuazozgqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009144.1494124-65-121912473418158/AnsiballZ_copy.py'
Nov 24 18:32:25 compute-0 sudo[152274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:25 compute-0 python3.9[152276]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009144.1494124-65-121912473418158/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=04ab0229204e8e683e25d7b389e5447dda25fab6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:32:25 compute-0 sudo[152274]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:26 compute-0 sudo[152426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckpcgkqlygdfniondvwcsynbszihdemv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009145.837043-65-177097578861882/AnsiballZ_stat.py'
Nov 24 18:32:26 compute-0 sudo[152426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:26 compute-0 python3.9[152428]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:32:26 compute-0 sudo[152426]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:26 compute-0 sudo[152549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icciqokjvkmwhlzkyxbrgplsacwzfrvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009145.837043-65-177097578861882/AnsiballZ_copy.py'
Nov 24 18:32:26 compute-0 sudo[152549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:26 compute-0 python3.9[152551]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009145.837043-65-177097578861882/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=79429362a394ef2683f794df52ffa3b38ef1c939 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:32:26 compute-0 sudo[152549]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:27 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v497: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:27 compute-0 ceph-mon[74927]: pgmap v497: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:27 compute-0 sudo[152701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suywgshcftbvzledmzcmmdazfjdopngz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009147.0485177-65-155868746217925/AnsiballZ_stat.py'
Nov 24 18:32:27 compute-0 sudo[152701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:27 compute-0 python3.9[152703]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:32:27 compute-0 sudo[152701]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:32:27 compute-0 sudo[152824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugqfllsnnjrgpvdyeimpaowsufohkedd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009147.0485177-65-155868746217925/AnsiballZ_copy.py'
Nov 24 18:32:27 compute-0 sudo[152824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:28 compute-0 python3.9[152826]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009147.0485177-65-155868746217925/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=7518fc18a6b36988d98be0ee7f2c8b7779ca174f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:32:28 compute-0 sudo[152824]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:28 compute-0 sudo[152976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zehrmxjkdapbtdqpgugxvyizbnkjrgsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009148.2933528-109-154772898546278/AnsiballZ_file.py'
Nov 24 18:32:28 compute-0 sudo[152976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:28 compute-0 python3.9[152978]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:32:28 compute-0 sudo[152976]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:29 compute-0 sudo[153128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twyhqcqftfegxceeomvhualesmotkcdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009148.9440396-109-26038920674210/AnsiballZ_file.py'
Nov 24 18:32:29 compute-0 sudo[153128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:29 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v498: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:29 compute-0 ceph-mon[74927]: pgmap v498: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:29 compute-0 python3.9[153130]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:32:29 compute-0 sudo[153128]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:29 compute-0 sudo[153280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoiagyjowyhihyosgvnfmvqyzekmemwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009149.6092713-124-1261775699706/AnsiballZ_stat.py'
Nov 24 18:32:29 compute-0 sudo[153280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:30 compute-0 python3.9[153282]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:32:30 compute-0 sudo[153280]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:30 compute-0 sudo[153403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcqbnotahvknsnxsiejpfscdfgfebnye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009149.6092713-124-1261775699706/AnsiballZ_copy.py'
Nov 24 18:32:30 compute-0 sudo[153403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:30 compute-0 python3.9[153405]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009149.6092713-124-1261775699706/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=b798c7d4884914f8199c0298f01b39ef12806173 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:32:30 compute-0 sudo[153403]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:31 compute-0 sudo[153555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glwwypmtiwfspspzjfkktfbyzrstsgtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009150.8053813-124-191685444811499/AnsiballZ_stat.py'
Nov 24 18:32:31 compute-0 sudo[153555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:31 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v499: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:31 compute-0 python3.9[153557]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:32:31 compute-0 sudo[153555]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:31 compute-0 ceph-mon[74927]: pgmap v499: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:31 compute-0 sudo[153678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbhdgcxdafdhsxfudvutgwmlufoxtggi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009150.8053813-124-191685444811499/AnsiballZ_copy.py'
Nov 24 18:32:31 compute-0 sudo[153678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:31 compute-0 python3.9[153680]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009150.8053813-124-191685444811499/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=e002ea2e2d89648d7a0d696996ed799d0e5d34b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:32:31 compute-0 sudo[153678]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:32 compute-0 sudo[153830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuvldztcpmvhxmtzoozivxsgzincwrhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009151.982079-124-179531720915942/AnsiballZ_stat.py'
Nov 24 18:32:32 compute-0 sudo[153830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:32 compute-0 python3.9[153832]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:32:32 compute-0 sudo[153830]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:32:32 compute-0 sudo[153953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbkyljxesyslacydsuscusemozwrwcbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009151.982079-124-179531720915942/AnsiballZ_copy.py'
Nov 24 18:32:32 compute-0 sudo[153953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:32 compute-0 python3.9[153955]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009151.982079-124-179531720915942/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=f6526974e9bafe125505ea4c1e3ecfa5aecfb306 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:32:32 compute-0 sudo[153953]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:33 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v500: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:33 compute-0 ceph-mon[74927]: pgmap v500: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:33 compute-0 sudo[154105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acqycbjbxdnbxsermiotekjxlfxritmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009153.2219465-168-109888722413335/AnsiballZ_file.py'
Nov 24 18:32:33 compute-0 sudo[154105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:33 compute-0 python3.9[154107]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:32:33 compute-0 sudo[154105]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:34 compute-0 sudo[154257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgwonpwxoiaamyrhhzokercgfujaayxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009153.7914526-168-181788084965410/AnsiballZ_file.py'
Nov 24 18:32:34 compute-0 sudo[154257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:34 compute-0 python3.9[154259]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:32:34 compute-0 sudo[154257]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:32:34
Nov 24 18:32:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:32:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:32:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'volumes', 'images', 'default.rgw.log', '.mgr', 'vms']
Nov 24 18:32:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:32:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:32:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:32:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:32:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:32:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:32:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:32:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:32:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:32:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:32:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:32:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:32:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:32:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:32:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:32:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:32:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:32:34 compute-0 sudo[154409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptcdtjznivefwsjhzrdikrnukoqjrrsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009154.5091846-183-171031919124707/AnsiballZ_stat.py'
Nov 24 18:32:34 compute-0 sudo[154409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:34 compute-0 python3.9[154411]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:32:34 compute-0 sudo[154409]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:35 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v501: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:35 compute-0 sudo[154532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-novrtfcyzrufngqiidcjfloyilvuotcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009154.5091846-183-171031919124707/AnsiballZ_copy.py'
Nov 24 18:32:35 compute-0 sudo[154532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:35 compute-0 ceph-mon[74927]: pgmap v501: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:35 compute-0 python3.9[154534]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009154.5091846-183-171031919124707/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=705500fe9885935f2329f2ca970fd4743071d167 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:32:35 compute-0 sudo[154532]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:35 compute-0 sudo[154684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anxshtyzmgexjxcvrysejaeponukktla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009155.6244013-183-9032051815524/AnsiballZ_stat.py'
Nov 24 18:32:35 compute-0 sudo[154684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:36 compute-0 python3.9[154686]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:32:36 compute-0 sudo[154684]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:36 compute-0 sudo[154807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prsejlvpgklkyajntfudxdcexiljnzws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009155.6244013-183-9032051815524/AnsiballZ_copy.py'
Nov 24 18:32:36 compute-0 sudo[154807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:36 compute-0 python3.9[154809]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009155.6244013-183-9032051815524/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=e002ea2e2d89648d7a0d696996ed799d0e5d34b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:32:36 compute-0 sudo[154807]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:37 compute-0 sudo[154959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmtddzvjfsjtgnyqbbwrxhfmgekqjwuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009156.7364738-183-115519125593309/AnsiballZ_stat.py'
Nov 24 18:32:37 compute-0 sudo[154959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:37 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v502: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:37 compute-0 python3.9[154961]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:32:37 compute-0 sudo[154959]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:37 compute-0 ceph-mon[74927]: pgmap v502: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:32:37 compute-0 sudo[155082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbdowblndmyltdbrmxmffxylhnmarvls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009156.7364738-183-115519125593309/AnsiballZ_copy.py'
Nov 24 18:32:37 compute-0 sudo[155082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:37 compute-0 python3.9[155084]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009156.7364738-183-115519125593309/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=d63a741b9142b27415a97a0572bea2566e38144d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:32:37 compute-0 sudo[155082]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:38 compute-0 sudo[155234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nccwcudeyeasroradxnhdbmwxdtvwthl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009158.5636957-243-37523450458413/AnsiballZ_file.py'
Nov 24 18:32:38 compute-0 sudo[155234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:39 compute-0 python3.9[155236]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:32:39 compute-0 sudo[155234]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:39 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v503: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:39 compute-0 ceph-mon[74927]: pgmap v503: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:39 compute-0 sudo[155386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdbdhpcsxzevkjewpepfzlqfodcotvne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009159.2682621-251-213364431235913/AnsiballZ_stat.py'
Nov 24 18:32:39 compute-0 sudo[155386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:39 compute-0 python3.9[155388]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:32:39 compute-0 sudo[155386]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:40 compute-0 sudo[155509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwhwutapdglthyzfkulysphnzetknrpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009159.2682621-251-213364431235913/AnsiballZ_copy.py'
Nov 24 18:32:40 compute-0 sudo[155509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:40 compute-0 python3.9[155511]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009159.2682621-251-213364431235913/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4453bc72f5dea8ea952ecd01786d1a0544923cc0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:32:40 compute-0 sudo[155509]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:40 compute-0 sudo[155661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-embmkfputozhspqiwlywmiwjxohprsjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009160.6530666-267-37060364843063/AnsiballZ_file.py'
Nov 24 18:32:40 compute-0 sudo[155661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:41 compute-0 python3.9[155663]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:32:41 compute-0 sudo[155661]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:41 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v504: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:41 compute-0 ceph-mon[74927]: pgmap v504: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:41 compute-0 sudo[155813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qudvwujnohaamczpawcsvmwwwknqurty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009161.3265781-275-9729059007512/AnsiballZ_stat.py'
Nov 24 18:32:41 compute-0 sudo[155813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:41 compute-0 python3.9[155815]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:32:41 compute-0 sudo[155813]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:42 compute-0 sudo[155936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifgtpzcjgqgpvetqlyjieuxuzliylilc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009161.3265781-275-9729059007512/AnsiballZ_copy.py'
Nov 24 18:32:42 compute-0 sudo[155936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:42 compute-0 python3.9[155938]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009161.3265781-275-9729059007512/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4453bc72f5dea8ea952ecd01786d1a0544923cc0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:32:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:32:42 compute-0 sudo[155936]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:32:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:32:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:32:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:32:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:32:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:32:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:32:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:32:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:32:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:32:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:32:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:32:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:32:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:32:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:32:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:32:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:32:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:32:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:32:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:32:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:32:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:32:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:32:43 compute-0 sudo[156088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axbutzkzkucmtxvvbgvojkkycuuwiymf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009162.8121183-291-78732984266308/AnsiballZ_file.py'
Nov 24 18:32:43 compute-0 sudo[156088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:43 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v505: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:43 compute-0 ceph-mon[74927]: pgmap v505: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:43 compute-0 python3.9[156090]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:32:43 compute-0 sudo[156088]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:43 compute-0 sudo[156240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhflckblynkxpnmmobnjrkfoadeowakj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009163.512652-299-49964354913980/AnsiballZ_stat.py'
Nov 24 18:32:43 compute-0 sudo[156240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:44 compute-0 python3.9[156242]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:32:44 compute-0 sudo[156240]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:44 compute-0 sudo[156363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtjlfwyhtwbbndgauzorpqtfkqqwmtcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009163.512652-299-49964354913980/AnsiballZ_copy.py'
Nov 24 18:32:44 compute-0 sudo[156363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:44 compute-0 python3.9[156365]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009163.512652-299-49964354913980/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4453bc72f5dea8ea952ecd01786d1a0544923cc0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:32:44 compute-0 sudo[156363]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:45 compute-0 sudo[156515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmqybuxkipkrgtoxrnglczmoasovbqty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009164.8490825-315-105134283432606/AnsiballZ_file.py'
Nov 24 18:32:45 compute-0 sudo[156515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:45 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v506: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:45 compute-0 ceph-mon[74927]: pgmap v506: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:45 compute-0 python3.9[156517]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:32:45 compute-0 sudo[156515]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:45 compute-0 sudo[156667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfeblsvaekzcsqubbkevqewygidrvaao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009165.6151438-323-272466191618071/AnsiballZ_stat.py'
Nov 24 18:32:45 compute-0 sudo[156667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:46 compute-0 python3.9[156669]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:32:46 compute-0 sudo[156667]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:46 compute-0 sudo[156790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwqqcbjgbgdfjwunjkkszqgvfbmvmxzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009165.6151438-323-272466191618071/AnsiballZ_copy.py'
Nov 24 18:32:46 compute-0 sudo[156790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:46 compute-0 python3.9[156792]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009165.6151438-323-272466191618071/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4453bc72f5dea8ea952ecd01786d1a0544923cc0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:32:46 compute-0 sudo[156790]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:47 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v507: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:47 compute-0 sudo[156942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pobszxipfjbrubgynlanyncipwtpxdry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009166.9471161-339-229714644980766/AnsiballZ_file.py'
Nov 24 18:32:47 compute-0 sudo[156942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:47 compute-0 ceph-mon[74927]: pgmap v507: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:47 compute-0 python3.9[156944]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:32:47 compute-0 sudo[156942]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:32:48 compute-0 sudo[157094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kceygcmtdgeqpsicdiimxpakgafiizcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009167.7751353-347-101961344962667/AnsiballZ_stat.py'
Nov 24 18:32:48 compute-0 sudo[157094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:48 compute-0 python3.9[157096]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:32:48 compute-0 sudo[157094]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:48 compute-0 sudo[157217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cukexthbmmlnugbbzjdrcpstshtfahvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009167.7751353-347-101961344962667/AnsiballZ_copy.py'
Nov 24 18:32:48 compute-0 sudo[157217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:48 compute-0 python3.9[157219]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009167.7751353-347-101961344962667/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4453bc72f5dea8ea952ecd01786d1a0544923cc0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:32:48 compute-0 sudo[157217]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:49 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v508: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:49 compute-0 sudo[157369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgxogeoaaodxsqdaobfnifzmfuhbsqho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009168.9840088-363-217577894375160/AnsiballZ_file.py'
Nov 24 18:32:49 compute-0 sudo[157369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:49 compute-0 ceph-mon[74927]: pgmap v508: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:49 compute-0 python3.9[157371]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:32:49 compute-0 sudo[157369]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:49 compute-0 sudo[157521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxpdkghrwcosqnaskyyxckjgbqvuwzqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009169.586473-371-113639924924673/AnsiballZ_stat.py'
Nov 24 18:32:49 compute-0 sudo[157521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:50 compute-0 python3.9[157523]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:32:50 compute-0 sudo[157521]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:50 compute-0 sudo[157644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyaosxtulzzyzidjjjocinxyprihudlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009169.586473-371-113639924924673/AnsiballZ_copy.py'
Nov 24 18:32:50 compute-0 sudo[157644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:50 compute-0 python3.9[157646]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009169.586473-371-113639924924673/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4453bc72f5dea8ea952ecd01786d1a0544923cc0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:32:50 compute-0 sudo[157644]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:50 compute-0 sshd-session[151543]: Connection closed by 192.168.122.30 port 38264
Nov 24 18:32:50 compute-0 sshd-session[151540]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:32:50 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Nov 24 18:32:50 compute-0 systemd[1]: session-46.scope: Consumed 22.298s CPU time.
Nov 24 18:32:50 compute-0 systemd-logind[822]: Session 46 logged out. Waiting for processes to exit.
Nov 24 18:32:50 compute-0 systemd-logind[822]: Removed session 46.
Nov 24 18:32:51 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v509: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:51 compute-0 ceph-mon[74927]: pgmap v509: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:32:53 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v510: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:53 compute-0 ceph-mon[74927]: pgmap v510: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:55 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v511: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:55 compute-0 ceph-mon[74927]: pgmap v511: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:56 compute-0 sshd-session[157672]: Accepted publickey for zuul from 192.168.122.30 port 56244 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:32:56 compute-0 systemd-logind[822]: New session 47 of user zuul.
Nov 24 18:32:56 compute-0 systemd[1]: Started Session 47 of User zuul.
Nov 24 18:32:56 compute-0 sshd-session[157672]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:32:57 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v512: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:57 compute-0 ceph-mon[74927]: pgmap v512: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:57 compute-0 sudo[157825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sztcxcjeulbklysacuynlqvllzaltffj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009176.6991599-22-191769154116861/AnsiballZ_file.py'
Nov 24 18:32:57 compute-0 sudo[157825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:57 compute-0 python3.9[157827]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:32:57 compute-0 sudo[157825]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:32:58 compute-0 sudo[157977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfosixenybjigdhjpskxsgmchiwlwleg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009177.697789-34-127937099787305/AnsiballZ_stat.py'
Nov 24 18:32:58 compute-0 sudo[157977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:58 compute-0 python3.9[157979]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:32:58 compute-0 sudo[157977]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:58 compute-0 sudo[158100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijarrayinpncdqvpoajoenglaritplna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009177.697789-34-127937099787305/AnsiballZ_copy.py'
Nov 24 18:32:58 compute-0 sudo[158100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:32:59 compute-0 python3.9[158102]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764009177.697789-34-127937099787305/.source.conf _original_basename=ceph.conf follow=False checksum=e6376665f4d651a92ab919b303c349cf96ae8bd0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:32:59 compute-0 sudo[158100]: pam_unix(sudo:session): session closed for user root
Nov 24 18:32:59 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v513: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:32:59 compute-0 ceph-mon[74927]: pgmap v513: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:00 compute-0 sudo[158252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwzekjjydpvhweqtmcgsszxgxbhtgcob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009179.7347126-34-70650416358500/AnsiballZ_stat.py'
Nov 24 18:33:00 compute-0 sudo[158252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:00 compute-0 python3.9[158254]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:33:00 compute-0 sudo[158252]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:00 compute-0 sudo[158375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkvnlvjeauoiprxzmxlmexgtqimzycht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009179.7347126-34-70650416358500/AnsiballZ_copy.py'
Nov 24 18:33:00 compute-0 sudo[158375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:00 compute-0 python3.9[158377]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764009179.7347126-34-70650416358500/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=da81228d7cc67f3a06b39ee156e276fa0a4ebf0e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:33:00 compute-0 sudo[158375]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:01 compute-0 sshd-session[157675]: Connection closed by 192.168.122.30 port 56244
Nov 24 18:33:01 compute-0 sshd-session[157672]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:33:01 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Nov 24 18:33:01 compute-0 systemd[1]: session-47.scope: Consumed 2.401s CPU time.
Nov 24 18:33:01 compute-0 systemd-logind[822]: Session 47 logged out. Waiting for processes to exit.
Nov 24 18:33:01 compute-0 systemd-logind[822]: Removed session 47.
Nov 24 18:33:01 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v514: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:01 compute-0 ceph-mon[74927]: pgmap v514: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:33:03 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v515: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:03 compute-0 ceph-mon[74927]: pgmap v515: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:33:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:33:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:33:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:33:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:33:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:33:05 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v516: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:05 compute-0 ceph-mon[74927]: pgmap v516: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:06 compute-0 sshd-session[158402]: Accepted publickey for zuul from 192.168.122.30 port 60402 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:33:06 compute-0 systemd-logind[822]: New session 48 of user zuul.
Nov 24 18:33:06 compute-0 systemd[1]: Started Session 48 of User zuul.
Nov 24 18:33:06 compute-0 sshd-session[158402]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:33:07 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v517: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:07 compute-0 ceph-mon[74927]: pgmap v517: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:07 compute-0 python3.9[158555]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:33:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:33:08 compute-0 sudo[158709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqdvclweffrtylvpkcqxydseiobtrpms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009187.999466-34-212090873161209/AnsiballZ_file.py'
Nov 24 18:33:08 compute-0 sudo[158709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:08 compute-0 python3.9[158711]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:33:08 compute-0 sudo[158709]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:09 compute-0 sudo[158861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqkzxkeoptkrvjhsoxewxomujrnlypnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009188.8491201-34-162651788495143/AnsiballZ_file.py'
Nov 24 18:33:09 compute-0 sudo[158861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:09 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v518: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:09 compute-0 ceph-mon[74927]: pgmap v518: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:09 compute-0 python3.9[158863]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:33:09 compute-0 sudo[158861]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:10 compute-0 python3.9[159013]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:33:10 compute-0 sudo[159163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwdpkucnhhwpruvygpuhzxcxmjxdjuge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009190.4107552-57-47941987733904/AnsiballZ_seboolean.py'
Nov 24 18:33:10 compute-0 sudo[159163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:11 compute-0 python3.9[159165]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 24 18:33:11 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v519: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:11 compute-0 ceph-mon[74927]: pgmap v519: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:12 compute-0 sudo[159163]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:12 compute-0 sudo[159170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:33:12 compute-0 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 24 18:33:12 compute-0 sudo[159170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:33:12 compute-0 sudo[159170]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:12 compute-0 sudo[159198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:33:12 compute-0 sudo[159198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:33:12 compute-0 sudo[159198]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:12 compute-0 sudo[159244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:33:12 compute-0 sudo[159244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:33:12 compute-0 sudo[159244]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:33:12 compute-0 sudo[159269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:33:12 compute-0 sudo[159269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:33:13 compute-0 sudo[159269]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:33:13 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:33:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:33:13 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:33:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:33:13 compute-0 sudo[159450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heuazagxtyavbtvchwhmtcvezhwmefnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009192.7516363-67-133886974617243/AnsiballZ_setup.py'
Nov 24 18:33:13 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:33:13 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev c9729518-895f-4a87-acb8-4f0d81de12a8 does not exist
Nov 24 18:33:13 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 70078941-b5ca-4602-a0d5-ad3292efc7d6 does not exist
Nov 24 18:33:13 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev bf0e80ae-d9d1-4917-9520-4aa730585040 does not exist
Nov 24 18:33:13 compute-0 sudo[159450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:33:13 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:33:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:33:13 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:33:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:33:13 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:33:13 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:33:13 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:33:13 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:33:13 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:33:13 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:33:13 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:33:13 compute-0 sudo[159453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:33:13 compute-0 sudo[159453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:33:13 compute-0 sudo[159453]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:13 compute-0 sudo[159478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:33:13 compute-0 sudo[159478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:33:13 compute-0 sudo[159478]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:13 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v520: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:13 compute-0 sudo[159503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:33:13 compute-0 sudo[159503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:33:13 compute-0 sudo[159503]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:13 compute-0 sudo[159528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:33:13 compute-0 sudo[159528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:33:13 compute-0 python3.9[159452]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 18:33:13 compute-0 sudo[159450]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:13 compute-0 podman[159601]: 2025-11-24 18:33:13.616825919 +0000 UTC m=+0.034104447 container create 00ff2a7ea718990ee0b838e133fb566a6abcab6d3eb40fafc691b9fbedbaaa57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 18:33:13 compute-0 systemd[1]: Started libpod-conmon-00ff2a7ea718990ee0b838e133fb566a6abcab6d3eb40fafc691b9fbedbaaa57.scope.
Nov 24 18:33:13 compute-0 podman[159601]: 2025-11-24 18:33:13.601102605 +0000 UTC m=+0.018381133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:33:13 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:33:13 compute-0 podman[159601]: 2025-11-24 18:33:13.714690507 +0000 UTC m=+0.131969035 container init 00ff2a7ea718990ee0b838e133fb566a6abcab6d3eb40fafc691b9fbedbaaa57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:33:13 compute-0 podman[159601]: 2025-11-24 18:33:13.7216011 +0000 UTC m=+0.138879628 container start 00ff2a7ea718990ee0b838e133fb566a6abcab6d3eb40fafc691b9fbedbaaa57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goldwasser, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Nov 24 18:33:13 compute-0 podman[159601]: 2025-11-24 18:33:13.724835942 +0000 UTC m=+0.142114470 container attach 00ff2a7ea718990ee0b838e133fb566a6abcab6d3eb40fafc691b9fbedbaaa57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goldwasser, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 18:33:13 compute-0 amazing_goldwasser[159617]: 167 167
Nov 24 18:33:13 compute-0 systemd[1]: libpod-00ff2a7ea718990ee0b838e133fb566a6abcab6d3eb40fafc691b9fbedbaaa57.scope: Deactivated successfully.
Nov 24 18:33:13 compute-0 podman[159601]: 2025-11-24 18:33:13.727514759 +0000 UTC m=+0.144793287 container died 00ff2a7ea718990ee0b838e133fb566a6abcab6d3eb40fafc691b9fbedbaaa57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 24 18:33:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-22cb7840db99737ea2ffa142dffa520d8759d60fe1c5d0d13267f0dbb93278f6-merged.mount: Deactivated successfully.
Nov 24 18:33:13 compute-0 podman[159601]: 2025-11-24 18:33:13.775442762 +0000 UTC m=+0.192721310 container remove 00ff2a7ea718990ee0b838e133fb566a6abcab6d3eb40fafc691b9fbedbaaa57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 24 18:33:13 compute-0 systemd[1]: libpod-conmon-00ff2a7ea718990ee0b838e133fb566a6abcab6d3eb40fafc691b9fbedbaaa57.scope: Deactivated successfully.
Nov 24 18:33:13 compute-0 podman[159664]: 2025-11-24 18:33:13.925985732 +0000 UTC m=+0.036251201 container create 5a7bbce6c6340dba2df1fd221a24cbee0bd4b99f05de42c394ad0ec910324319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_villani, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 18:33:13 compute-0 systemd[1]: Started libpod-conmon-5a7bbce6c6340dba2df1fd221a24cbee0bd4b99f05de42c394ad0ec910324319.scope.
Nov 24 18:33:14 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:33:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b982657825c3c6b6494c9896a8448cb8af5db784e2e7b664f9c15d4e196392f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:33:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b982657825c3c6b6494c9896a8448cb8af5db784e2e7b664f9c15d4e196392f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:33:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b982657825c3c6b6494c9896a8448cb8af5db784e2e7b664f9c15d4e196392f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:33:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b982657825c3c6b6494c9896a8448cb8af5db784e2e7b664f9c15d4e196392f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:33:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b982657825c3c6b6494c9896a8448cb8af5db784e2e7b664f9c15d4e196392f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:33:14 compute-0 podman[159664]: 2025-11-24 18:33:13.910390721 +0000 UTC m=+0.020656190 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:33:14 compute-0 podman[159664]: 2025-11-24 18:33:14.013805478 +0000 UTC m=+0.124070947 container init 5a7bbce6c6340dba2df1fd221a24cbee0bd4b99f05de42c394ad0ec910324319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 18:33:14 compute-0 sudo[159734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veabxpozqdohckewqnilxtelnheijipk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009192.7516363-67-133886974617243/AnsiballZ_dnf.py'
Nov 24 18:33:14 compute-0 podman[159664]: 2025-11-24 18:33:14.020623159 +0000 UTC m=+0.130888608 container start 5a7bbce6c6340dba2df1fd221a24cbee0bd4b99f05de42c394ad0ec910324319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:33:14 compute-0 sudo[159734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:14 compute-0 podman[159664]: 2025-11-24 18:33:14.024194338 +0000 UTC m=+0.134459867 container attach 5a7bbce6c6340dba2df1fd221a24cbee0bd4b99f05de42c394ad0ec910324319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 18:33:14 compute-0 ceph-mon[74927]: pgmap v520: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:14 compute-0 python3.9[159738]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:33:15 compute-0 upbeat_villani[159705]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:33:15 compute-0 upbeat_villani[159705]: --> relative data size: 1.0
Nov 24 18:33:15 compute-0 upbeat_villani[159705]: --> All data devices are unavailable
Nov 24 18:33:15 compute-0 systemd[1]: libpod-5a7bbce6c6340dba2df1fd221a24cbee0bd4b99f05de42c394ad0ec910324319.scope: Deactivated successfully.
Nov 24 18:33:15 compute-0 podman[159664]: 2025-11-24 18:33:15.06925197 +0000 UTC m=+1.179517419 container died 5a7bbce6c6340dba2df1fd221a24cbee0bd4b99f05de42c394ad0ec910324319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_villani, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 18:33:15 compute-0 systemd[1]: libpod-5a7bbce6c6340dba2df1fd221a24cbee0bd4b99f05de42c394ad0ec910324319.scope: Consumed 1.003s CPU time.
Nov 24 18:33:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b982657825c3c6b6494c9896a8448cb8af5db784e2e7b664f9c15d4e196392f-merged.mount: Deactivated successfully.
Nov 24 18:33:15 compute-0 podman[159664]: 2025-11-24 18:33:15.123085391 +0000 UTC m=+1.233350840 container remove 5a7bbce6c6340dba2df1fd221a24cbee0bd4b99f05de42c394ad0ec910324319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:33:15 compute-0 systemd[1]: libpod-conmon-5a7bbce6c6340dba2df1fd221a24cbee0bd4b99f05de42c394ad0ec910324319.scope: Deactivated successfully.
Nov 24 18:33:15 compute-0 sudo[159528]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:15 compute-0 sudo[159778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:33:15 compute-0 sudo[159778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:33:15 compute-0 sudo[159778]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:15 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v521: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:15 compute-0 sudo[159803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:33:15 compute-0 sudo[159803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:33:15 compute-0 sudo[159803]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:15 compute-0 ceph-mon[74927]: pgmap v521: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:15 compute-0 sudo[159828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:33:15 compute-0 sudo[159828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:33:15 compute-0 sudo[159828]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:15 compute-0 sudo[159853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:33:15 compute-0 sudo[159853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:33:15 compute-0 sudo[159734]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:15 compute-0 podman[159943]: 2025-11-24 18:33:15.72796623 +0000 UTC m=+0.058199503 container create b6bb91aa4e5296d652eb9c37d2b6b8bfd83b20d69c7e19fe8125b1f48ed639c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rubin, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 18:33:15 compute-0 systemd[1]: Started libpod-conmon-b6bb91aa4e5296d652eb9c37d2b6b8bfd83b20d69c7e19fe8125b1f48ed639c2.scope.
Nov 24 18:33:15 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:33:15 compute-0 podman[159943]: 2025-11-24 18:33:15.706043979 +0000 UTC m=+0.036277332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:33:15 compute-0 podman[159943]: 2025-11-24 18:33:15.811273872 +0000 UTC m=+0.141507155 container init b6bb91aa4e5296d652eb9c37d2b6b8bfd83b20d69c7e19fe8125b1f48ed639c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 18:33:15 compute-0 podman[159943]: 2025-11-24 18:33:15.816532484 +0000 UTC m=+0.146765747 container start b6bb91aa4e5296d652eb9c37d2b6b8bfd83b20d69c7e19fe8125b1f48ed639c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rubin, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 18:33:15 compute-0 podman[159943]: 2025-11-24 18:33:15.819290623 +0000 UTC m=+0.149523926 container attach b6bb91aa4e5296d652eb9c37d2b6b8bfd83b20d69c7e19fe8125b1f48ed639c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 18:33:15 compute-0 angry_rubin[160010]: 167 167
Nov 24 18:33:15 compute-0 systemd[1]: libpod-b6bb91aa4e5296d652eb9c37d2b6b8bfd83b20d69c7e19fe8125b1f48ed639c2.scope: Deactivated successfully.
Nov 24 18:33:15 compute-0 podman[159943]: 2025-11-24 18:33:15.822511234 +0000 UTC m=+0.152744497 container died b6bb91aa4e5296d652eb9c37d2b6b8bfd83b20d69c7e19fe8125b1f48ed639c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rubin, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:33:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1952e9e19e9069218ee51eb33abc0a3c68fd9a2faa7eb25d7f06e2a72bfe9e0-merged.mount: Deactivated successfully.
Nov 24 18:33:15 compute-0 podman[159943]: 2025-11-24 18:33:15.862105978 +0000 UTC m=+0.192339251 container remove b6bb91aa4e5296d652eb9c37d2b6b8bfd83b20d69c7e19fe8125b1f48ed639c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rubin, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 18:33:15 compute-0 systemd[1]: libpod-conmon-b6bb91aa4e5296d652eb9c37d2b6b8bfd83b20d69c7e19fe8125b1f48ed639c2.scope: Deactivated successfully.
Nov 24 18:33:16 compute-0 podman[160035]: 2025-11-24 18:33:16.020195488 +0000 UTC m=+0.042725704 container create 131ecdd2bde7ba6ab243c566cb7d0312d2c766029fd9c56d2e0939e69dcf9806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_spence, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 18:33:16 compute-0 systemd[1]: Started libpod-conmon-131ecdd2bde7ba6ab243c566cb7d0312d2c766029fd9c56d2e0939e69dcf9806.scope.
Nov 24 18:33:16 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:33:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3848413bf1ba806ff64ad79c8351e588a3fbbb2c6c181df923039169970b08d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:33:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3848413bf1ba806ff64ad79c8351e588a3fbbb2c6c181df923039169970b08d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:33:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3848413bf1ba806ff64ad79c8351e588a3fbbb2c6c181df923039169970b08d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:33:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3848413bf1ba806ff64ad79c8351e588a3fbbb2c6c181df923039169970b08d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:33:16 compute-0 podman[160035]: 2025-11-24 18:33:16.00197848 +0000 UTC m=+0.024508746 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:33:16 compute-0 podman[160035]: 2025-11-24 18:33:16.109610093 +0000 UTC m=+0.132140349 container init 131ecdd2bde7ba6ab243c566cb7d0312d2c766029fd9c56d2e0939e69dcf9806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_spence, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:33:16 compute-0 podman[160035]: 2025-11-24 18:33:16.121169633 +0000 UTC m=+0.143699879 container start 131ecdd2bde7ba6ab243c566cb7d0312d2c766029fd9c56d2e0939e69dcf9806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_spence, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 18:33:16 compute-0 podman[160035]: 2025-11-24 18:33:16.12463968 +0000 UTC m=+0.147169916 container attach 131ecdd2bde7ba6ab243c566cb7d0312d2c766029fd9c56d2e0939e69dcf9806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 18:33:16 compute-0 sudo[160130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owipufnwvjxtfjtulapixcwaccokthzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009195.6938527-79-96982883363411/AnsiballZ_systemd.py'
Nov 24 18:33:16 compute-0 sudo[160130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:16 compute-0 python3.9[160132]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 18:33:16 compute-0 objective_spence[160052]: {
Nov 24 18:33:16 compute-0 objective_spence[160052]:     "0": [
Nov 24 18:33:16 compute-0 objective_spence[160052]:         {
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "devices": [
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "/dev/loop3"
Nov 24 18:33:16 compute-0 objective_spence[160052]:             ],
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "lv_name": "ceph_lv0",
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "lv_size": "21470642176",
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "name": "ceph_lv0",
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "tags": {
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.cluster_name": "ceph",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.crush_device_class": "",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.encrypted": "0",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.osd_id": "0",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.type": "block",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.vdo": "0"
Nov 24 18:33:16 compute-0 objective_spence[160052]:             },
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "type": "block",
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "vg_name": "ceph_vg0"
Nov 24 18:33:16 compute-0 objective_spence[160052]:         }
Nov 24 18:33:16 compute-0 objective_spence[160052]:     ],
Nov 24 18:33:16 compute-0 objective_spence[160052]:     "1": [
Nov 24 18:33:16 compute-0 objective_spence[160052]:         {
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "devices": [
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "/dev/loop4"
Nov 24 18:33:16 compute-0 objective_spence[160052]:             ],
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "lv_name": "ceph_lv1",
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "lv_size": "21470642176",
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "name": "ceph_lv1",
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "tags": {
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.cluster_name": "ceph",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.crush_device_class": "",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.encrypted": "0",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.osd_id": "1",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.type": "block",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.vdo": "0"
Nov 24 18:33:16 compute-0 objective_spence[160052]:             },
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "type": "block",
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "vg_name": "ceph_vg1"
Nov 24 18:33:16 compute-0 objective_spence[160052]:         }
Nov 24 18:33:16 compute-0 objective_spence[160052]:     ],
Nov 24 18:33:16 compute-0 objective_spence[160052]:     "2": [
Nov 24 18:33:16 compute-0 objective_spence[160052]:         {
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "devices": [
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "/dev/loop5"
Nov 24 18:33:16 compute-0 objective_spence[160052]:             ],
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "lv_name": "ceph_lv2",
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "lv_size": "21470642176",
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "name": "ceph_lv2",
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "tags": {
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.cluster_name": "ceph",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.crush_device_class": "",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.encrypted": "0",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.osd_id": "2",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.type": "block",
Nov 24 18:33:16 compute-0 objective_spence[160052]:                 "ceph.vdo": "0"
Nov 24 18:33:16 compute-0 objective_spence[160052]:             },
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "type": "block",
Nov 24 18:33:16 compute-0 objective_spence[160052]:             "vg_name": "ceph_vg2"
Nov 24 18:33:16 compute-0 objective_spence[160052]:         }
Nov 24 18:33:16 compute-0 objective_spence[160052]:     ]
Nov 24 18:33:16 compute-0 objective_spence[160052]: }
Nov 24 18:33:16 compute-0 systemd[1]: libpod-131ecdd2bde7ba6ab243c566cb7d0312d2c766029fd9c56d2e0939e69dcf9806.scope: Deactivated successfully.
Nov 24 18:33:16 compute-0 podman[160035]: 2025-11-24 18:33:16.880765275 +0000 UTC m=+0.903295491 container died 131ecdd2bde7ba6ab243c566cb7d0312d2c766029fd9c56d2e0939e69dcf9806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_spence, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 18:33:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-3848413bf1ba806ff64ad79c8351e588a3fbbb2c6c181df923039169970b08d8-merged.mount: Deactivated successfully.
Nov 24 18:33:16 compute-0 podman[160035]: 2025-11-24 18:33:16.937547961 +0000 UTC m=+0.960078167 container remove 131ecdd2bde7ba6ab243c566cb7d0312d2c766029fd9c56d2e0939e69dcf9806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_spence, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 18:33:16 compute-0 systemd[1]: libpod-conmon-131ecdd2bde7ba6ab243c566cb7d0312d2c766029fd9c56d2e0939e69dcf9806.scope: Deactivated successfully.
Nov 24 18:33:16 compute-0 sudo[159853]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:17 compute-0 sudo[160148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:33:17 compute-0 sudo[160148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:33:17 compute-0 sudo[160148]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:17 compute-0 sudo[160173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:33:17 compute-0 sudo[160173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:33:17 compute-0 sudo[160173]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:17 compute-0 sudo[160198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:33:17 compute-0 sudo[160198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:33:17 compute-0 sudo[160198]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:17 compute-0 sudo[160223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:33:17 compute-0 sudo[160223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:33:17 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v522: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.294878) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009197294936, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2040, "num_deletes": 251, "total_data_size": 3487867, "memory_usage": 3549632, "flush_reason": "Manual Compaction"}
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009197312121, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3412733, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9723, "largest_seqno": 11762, "table_properties": {"data_size": 3403444, "index_size": 5911, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17818, "raw_average_key_size": 19, "raw_value_size": 3385063, "raw_average_value_size": 3695, "num_data_blocks": 268, "num_entries": 916, "num_filter_entries": 916, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008966, "oldest_key_time": 1764008966, "file_creation_time": 1764009197, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 17285 microseconds, and 8081 cpu microseconds.
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.312163) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3412733 bytes OK
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.312181) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.313455) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.313468) EVENT_LOG_v1 {"time_micros": 1764009197313464, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.313486) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3479360, prev total WAL file size 3479360, number of live WAL files 2.
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.314665) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3332KB)], [26(5999KB)]
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009197314693, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9555960, "oldest_snapshot_seqno": -1}
Nov 24 18:33:17 compute-0 ceph-mon[74927]: pgmap v522: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3714 keys, 7820080 bytes, temperature: kUnknown
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009197356997, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 7820080, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7791670, "index_size": 17996, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9349, "raw_key_size": 89270, "raw_average_key_size": 24, "raw_value_size": 7721053, "raw_average_value_size": 2078, "num_data_blocks": 779, "num_entries": 3714, "num_filter_entries": 3714, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008324, "oldest_key_time": 0, "file_creation_time": 1764009197, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.357249) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 7820080 bytes
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.358580) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 225.5 rd, 184.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.9 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4228, records dropped: 514 output_compression: NoCompression
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.358601) EVENT_LOG_v1 {"time_micros": 1764009197358591, "job": 10, "event": "compaction_finished", "compaction_time_micros": 42379, "compaction_time_cpu_micros": 17150, "output_level": 6, "num_output_files": 1, "total_output_size": 7820080, "num_input_records": 4228, "num_output_records": 3714, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009197359412, "job": 10, "event": "table_file_deletion", "file_number": 28}
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009197360806, "job": 10, "event": "table_file_deletion", "file_number": 26}
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.314608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.360858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.360864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.360867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.360870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:33:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.361260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:33:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:33:17 compute-0 podman[160288]: 2025-11-24 18:33:17.598369014 +0000 UTC m=+0.046716314 container create d5da9735b4016ecfac5ff076c9351cd63c44e2b178b24336bcf8de7c49857437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:33:17 compute-0 systemd[1]: Started libpod-conmon-d5da9735b4016ecfac5ff076c9351cd63c44e2b178b24336bcf8de7c49857437.scope.
Nov 24 18:33:17 compute-0 podman[160288]: 2025-11-24 18:33:17.572432893 +0000 UTC m=+0.020780253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:33:17 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:33:17 compute-0 podman[160288]: 2025-11-24 18:33:17.683108422 +0000 UTC m=+0.131455682 container init d5da9735b4016ecfac5ff076c9351cd63c44e2b178b24336bcf8de7c49857437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 18:33:17 compute-0 podman[160288]: 2025-11-24 18:33:17.694246842 +0000 UTC m=+0.142594112 container start d5da9735b4016ecfac5ff076c9351cd63c44e2b178b24336bcf8de7c49857437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 18:33:17 compute-0 podman[160288]: 2025-11-24 18:33:17.697535724 +0000 UTC m=+0.145882994 container attach d5da9735b4016ecfac5ff076c9351cd63c44e2b178b24336bcf8de7c49857437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:33:17 compute-0 zealous_haibt[160304]: 167 167
Nov 24 18:33:17 compute-0 systemd[1]: libpod-d5da9735b4016ecfac5ff076c9351cd63c44e2b178b24336bcf8de7c49857437.scope: Deactivated successfully.
Nov 24 18:33:17 compute-0 podman[160288]: 2025-11-24 18:33:17.700444777 +0000 UTC m=+0.148792097 container died d5da9735b4016ecfac5ff076c9351cd63c44e2b178b24336bcf8de7c49857437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:33:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-6353e69641e3e22060e3494312397b96b0064453e408941b77fd5bf106136d07-merged.mount: Deactivated successfully.
Nov 24 18:33:17 compute-0 podman[160288]: 2025-11-24 18:33:17.740173325 +0000 UTC m=+0.188520605 container remove d5da9735b4016ecfac5ff076c9351cd63c44e2b178b24336bcf8de7c49857437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 18:33:17 compute-0 systemd[1]: libpod-conmon-d5da9735b4016ecfac5ff076c9351cd63c44e2b178b24336bcf8de7c49857437.scope: Deactivated successfully.
Nov 24 18:33:17 compute-0 sudo[160130]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:17 compute-0 podman[160329]: 2025-11-24 18:33:17.891652899 +0000 UTC m=+0.050121760 container create 679499359f3ac5dbed96ff64eac0675a1c20e6e4ba5bde45451656a918b523e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_chatterjee, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:33:17 compute-0 systemd[1]: Started libpod-conmon-679499359f3ac5dbed96ff64eac0675a1c20e6e4ba5bde45451656a918b523e4.scope.
Nov 24 18:33:17 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:33:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9c72e2e7a7ff53222e45a2eb1d1d3f35ad4465ef617b4d3606984079a7601d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:33:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9c72e2e7a7ff53222e45a2eb1d1d3f35ad4465ef617b4d3606984079a7601d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:33:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9c72e2e7a7ff53222e45a2eb1d1d3f35ad4465ef617b4d3606984079a7601d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:33:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9c72e2e7a7ff53222e45a2eb1d1d3f35ad4465ef617b4d3606984079a7601d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:33:17 compute-0 podman[160329]: 2025-11-24 18:33:17.873075492 +0000 UTC m=+0.031544383 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:33:17 compute-0 podman[160329]: 2025-11-24 18:33:17.968150019 +0000 UTC m=+0.126618890 container init 679499359f3ac5dbed96ff64eac0675a1c20e6e4ba5bde45451656a918b523e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_chatterjee, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Nov 24 18:33:17 compute-0 podman[160329]: 2025-11-24 18:33:17.978213642 +0000 UTC m=+0.136682513 container start 679499359f3ac5dbed96ff64eac0675a1c20e6e4ba5bde45451656a918b523e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_chatterjee, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:33:17 compute-0 podman[160329]: 2025-11-24 18:33:17.980947411 +0000 UTC m=+0.139416282 container attach 679499359f3ac5dbed96ff64eac0675a1c20e6e4ba5bde45451656a918b523e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:33:18 compute-0 sudo[160499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqknfbczxjbffndphlufrtynfbdmjvme ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764009198.0729573-87-79897027097906/AnsiballZ_edpm_nftables_snippet.py'
Nov 24 18:33:18 compute-0 sudo[160499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:18 compute-0 python3[160501]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 24 18:33:18 compute-0 sudo[160499]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:18 compute-0 priceless_chatterjee[160369]: {
Nov 24 18:33:19 compute-0 priceless_chatterjee[160369]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:33:19 compute-0 priceless_chatterjee[160369]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:33:19 compute-0 priceless_chatterjee[160369]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:33:19 compute-0 priceless_chatterjee[160369]:         "osd_id": 0,
Nov 24 18:33:19 compute-0 priceless_chatterjee[160369]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:33:19 compute-0 priceless_chatterjee[160369]:         "type": "bluestore"
Nov 24 18:33:19 compute-0 priceless_chatterjee[160369]:     },
Nov 24 18:33:19 compute-0 priceless_chatterjee[160369]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:33:19 compute-0 priceless_chatterjee[160369]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:33:19 compute-0 priceless_chatterjee[160369]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:33:19 compute-0 priceless_chatterjee[160369]:         "osd_id": 1,
Nov 24 18:33:19 compute-0 priceless_chatterjee[160369]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:33:19 compute-0 priceless_chatterjee[160369]:         "type": "bluestore"
Nov 24 18:33:19 compute-0 priceless_chatterjee[160369]:     },
Nov 24 18:33:19 compute-0 priceless_chatterjee[160369]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:33:19 compute-0 priceless_chatterjee[160369]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:33:19 compute-0 priceless_chatterjee[160369]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:33:19 compute-0 priceless_chatterjee[160369]:         "osd_id": 2,
Nov 24 18:33:19 compute-0 priceless_chatterjee[160369]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:33:19 compute-0 priceless_chatterjee[160369]:         "type": "bluestore"
Nov 24 18:33:19 compute-0 priceless_chatterjee[160369]:     }
Nov 24 18:33:19 compute-0 priceless_chatterjee[160369]: }
Nov 24 18:33:19 compute-0 systemd[1]: libpod-679499359f3ac5dbed96ff64eac0675a1c20e6e4ba5bde45451656a918b523e4.scope: Deactivated successfully.
Nov 24 18:33:19 compute-0 systemd[1]: libpod-679499359f3ac5dbed96ff64eac0675a1c20e6e4ba5bde45451656a918b523e4.scope: Consumed 1.046s CPU time.
Nov 24 18:33:19 compute-0 conmon[160369]: conmon 679499359f3ac5dbed96 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-679499359f3ac5dbed96ff64eac0675a1c20e6e4ba5bde45451656a918b523e4.scope/container/memory.events
Nov 24 18:33:19 compute-0 podman[160329]: 2025-11-24 18:33:19.022161415 +0000 UTC m=+1.180630286 container died 679499359f3ac5dbed96ff64eac0675a1c20e6e4ba5bde45451656a918b523e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_chatterjee, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 18:33:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-af9c72e2e7a7ff53222e45a2eb1d1d3f35ad4465ef617b4d3606984079a7601d-merged.mount: Deactivated successfully.
Nov 24 18:33:19 compute-0 podman[160329]: 2025-11-24 18:33:19.072701674 +0000 UTC m=+1.231170525 container remove 679499359f3ac5dbed96ff64eac0675a1c20e6e4ba5bde45451656a918b523e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:33:19 compute-0 systemd[1]: libpod-conmon-679499359f3ac5dbed96ff64eac0675a1c20e6e4ba5bde45451656a918b523e4.scope: Deactivated successfully.
Nov 24 18:33:19 compute-0 sudo[160223]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:33:19 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:33:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:33:19 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:33:19 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 6d07fda0-cbdf-42da-8a6f-170c452d6dfe does not exist
Nov 24 18:33:19 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 3bab2517-246b-4fbf-b210-69d8b45c0fbf does not exist
Nov 24 18:33:19 compute-0 sudo[160618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:33:19 compute-0 sudo[160618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:33:19 compute-0 sudo[160618]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:19 compute-0 sudo[160666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:33:19 compute-0 sudo[160666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:33:19 compute-0 sudo[160666]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:19 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v523: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:19 compute-0 sudo[160741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwlfejalupkgvdzqjjdhabuhkqrvzfzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009199.0544434-96-117571551219233/AnsiballZ_file.py'
Nov 24 18:33:19 compute-0 sudo[160741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:19 compute-0 python3.9[160743]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:33:19 compute-0 sudo[160741]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:20 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:33:20 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:33:20 compute-0 ceph-mon[74927]: pgmap v523: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:20 compute-0 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 18:33:20 compute-0 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 18:33:20 compute-0 sudo[160894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcutciznqlzwtxkoztfkuojahetxjvpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009199.7308373-104-52476477122138/AnsiballZ_stat.py'
Nov 24 18:33:20 compute-0 sudo[160894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:20 compute-0 python3.9[160896]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:33:20 compute-0 sudo[160894]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:20 compute-0 sudo[160972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tysekbweklormoozeuwyxquetnolnphd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009199.7308373-104-52476477122138/AnsiballZ_file.py'
Nov 24 18:33:20 compute-0 sudo[160972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:20 compute-0 python3.9[160974]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:33:20 compute-0 sudo[160972]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:21 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v524: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:21 compute-0 ceph-mon[74927]: pgmap v524: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:21 compute-0 sudo[161124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpcfcyjtvsoqgfngtlzcswyzzhfsgwrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009201.070257-116-233311294680292/AnsiballZ_stat.py'
Nov 24 18:33:21 compute-0 sudo[161124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:21 compute-0 python3.9[161126]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:33:21 compute-0 sudo[161124]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:21 compute-0 sudo[161202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsyxwqaubtpfldtepoaqjclxolbaxptt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009201.070257-116-233311294680292/AnsiballZ_file.py'
Nov 24 18:33:21 compute-0 sudo[161202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:21 compute-0 python3.9[161204]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.4_u6k5rv recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:33:22 compute-0 sudo[161202]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:22 compute-0 sudo[161354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqhonszvfdvakyyxgiwyogwvsbetpqvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009202.1237369-128-49877428429427/AnsiballZ_stat.py'
Nov 24 18:33:22 compute-0 sudo[161354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:22 compute-0 python3.9[161356]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:33:22 compute-0 sudo[161354]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:33:22 compute-0 sudo[161432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcdusfldagfxwhkkgrllzfqgqctahhgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009202.1237369-128-49877428429427/AnsiballZ_file.py'
Nov 24 18:33:22 compute-0 sudo[161432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:22 compute-0 python3.9[161434]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:33:22 compute-0 sudo[161432]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:23 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v525: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:23 compute-0 ceph-mon[74927]: pgmap v525: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:23 compute-0 sudo[161584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txsytbptxxsovzzxgsitrrzwfddlvjac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009203.224028-141-184166517556419/AnsiballZ_command.py'
Nov 24 18:33:23 compute-0 sudo[161584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:23 compute-0 python3.9[161586]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:33:23 compute-0 sudo[161584]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:24 compute-0 sudo[161737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmcnorwpfewbpsdsgefokhakzhaewmpz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764009204.032199-149-106510226938013/AnsiballZ_edpm_nftables_from_files.py'
Nov 24 18:33:24 compute-0 sudo[161737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:24 compute-0 python3[161739]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 24 18:33:24 compute-0 sudo[161737]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:25 compute-0 sudo[161889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thkavljleyjmzmtpujiubipzwdlnjphe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009204.832712-157-187531896417261/AnsiballZ_stat.py'
Nov 24 18:33:25 compute-0 sudo[161889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:25 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v526: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:25 compute-0 ceph-mon[74927]: pgmap v526: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:25 compute-0 python3.9[161891]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:33:25 compute-0 sudo[161889]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:25 compute-0 sudo[162014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnkkrdbjqpghjglwukntgagnbbfwgmqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009204.832712-157-187531896417261/AnsiballZ_copy.py'
Nov 24 18:33:25 compute-0 sudo[162014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:26 compute-0 python3.9[162016]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009204.832712-157-187531896417261/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:33:26 compute-0 sudo[162014]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:26 compute-0 sudo[162166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwpcssiuofgclpwhvspvzitfvzhwlakn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009206.3507297-172-223217342086402/AnsiballZ_stat.py'
Nov 24 18:33:26 compute-0 sudo[162166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:26 compute-0 python3.9[162168]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:33:26 compute-0 sudo[162166]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:27 compute-0 sudo[162291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srcjgocnhhziklpkbpohcgkowmxgduad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009206.3507297-172-223217342086402/AnsiballZ_copy.py'
Nov 24 18:33:27 compute-0 sudo[162291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:27 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v527: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:27 compute-0 ceph-mon[74927]: pgmap v527: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:27 compute-0 python3.9[162293]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009206.3507297-172-223217342086402/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:33:27 compute-0 sudo[162291]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:33:27 compute-0 sudo[162443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drxkaoupgjwxzslrfvshsyhnqqzfwpec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009207.5716815-187-69513261474954/AnsiballZ_stat.py'
Nov 24 18:33:27 compute-0 sudo[162443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:28 compute-0 python3.9[162445]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:33:28 compute-0 sudo[162443]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:28 compute-0 sudo[162568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wceymakafckihrtpunodahsthgaptaha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009207.5716815-187-69513261474954/AnsiballZ_copy.py'
Nov 24 18:33:28 compute-0 sudo[162568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:28 compute-0 python3.9[162570]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009207.5716815-187-69513261474954/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:33:28 compute-0 sudo[162568]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:29 compute-0 sudo[162720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yghtfiqzjswggkkwyznhnvxloesgtmzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009208.7609055-202-119390797509032/AnsiballZ_stat.py'
Nov 24 18:33:29 compute-0 sudo[162720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:29 compute-0 python3.9[162722]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:33:29 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v528: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:29 compute-0 sudo[162720]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:29 compute-0 ceph-mon[74927]: pgmap v528: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:29 compute-0 sudo[162845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntvfpibhoqiemgeocxjdgrumoihsiezw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009208.7609055-202-119390797509032/AnsiballZ_copy.py'
Nov 24 18:33:29 compute-0 sudo[162845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:29 compute-0 python3.9[162847]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009208.7609055-202-119390797509032/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:33:29 compute-0 sudo[162845]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:30 compute-0 sudo[162997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulrlaopgsreyzdcauxmtsxxjqzoqsnoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009209.99195-217-72918704209764/AnsiballZ_stat.py'
Nov 24 18:33:30 compute-0 sudo[162997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:30 compute-0 python3.9[162999]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:33:30 compute-0 sudo[162997]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:30 compute-0 sudo[163122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzwslxumtfjtegrzzkvcryscguzzncqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009209.99195-217-72918704209764/AnsiballZ_copy.py'
Nov 24 18:33:30 compute-0 sudo[163122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:31 compute-0 python3.9[163124]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009209.99195-217-72918704209764/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:33:31 compute-0 sudo[163122]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:31 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v529: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:31 compute-0 ceph-mon[74927]: pgmap v529: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:31 compute-0 sudo[163274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhjjlugeruovfqbrkimevtvjnggwolns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009211.2763202-232-221369663180442/AnsiballZ_file.py'
Nov 24 18:33:31 compute-0 sudo[163274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:31 compute-0 python3.9[163276]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:33:31 compute-0 sudo[163274]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:32 compute-0 sudo[163426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbcwtqlrbptehucxrwqloxkljoyzuyjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009211.9175613-240-89471974041209/AnsiballZ_command.py'
Nov 24 18:33:32 compute-0 sudo[163426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:32 compute-0 python3.9[163428]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:33:32 compute-0 sudo[163426]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:33:33 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v530: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:33 compute-0 sudo[163581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drgmeblikqobxyqmclcovfxekgjuozti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009212.7160017-248-238883019166491/AnsiballZ_blockinfile.py'
Nov 24 18:33:33 compute-0 sudo[163581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:33 compute-0 ceph-mon[74927]: pgmap v530: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:33 compute-0 python3.9[163583]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:33:33 compute-0 sudo[163581]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:34 compute-0 sudo[163733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajfiuadmiienalectahicjbgygyoptub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009213.7628386-257-108260209348312/AnsiballZ_command.py'
Nov 24 18:33:34 compute-0 sudo[163733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:34 compute-0 python3.9[163735]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:33:34 compute-0 sudo[163733]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:33:34
Nov 24 18:33:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:33:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:33:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['backups', 'vms', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', '.rgw.root']
Nov 24 18:33:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:33:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:33:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:33:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:33:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:33:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:33:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:33:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:33:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:33:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:33:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:33:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:33:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:33:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:33:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:33:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:33:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:33:34 compute-0 sudo[163886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfsvkkvpvubylzavnbuewtgsugltbrcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009214.501275-265-3601908268180/AnsiballZ_stat.py'
Nov 24 18:33:34 compute-0 sudo[163886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:34 compute-0 python3.9[163888]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:33:34 compute-0 sudo[163886]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:35 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v531: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:35 compute-0 ceph-mon[74927]: pgmap v531: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:35 compute-0 sudo[164040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoiwiisnnhdihrqocdgfplpelqbjvwzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009215.208678-273-233980168088354/AnsiballZ_command.py'
Nov 24 18:33:35 compute-0 sudo[164040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:35 compute-0 python3.9[164042]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:33:35 compute-0 sudo[164040]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:36 compute-0 sudo[164195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbarqivfddkaajhfvwgtztgpmdbtqgyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009216.0250971-281-238599435496196/AnsiballZ_file.py'
Nov 24 18:33:36 compute-0 sudo[164195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:36 compute-0 python3.9[164197]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:33:36 compute-0 sudo[164195]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:37 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v532: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:37 compute-0 ceph-mon[74927]: pgmap v532: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:33:37 compute-0 python3.9[164347]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:33:38 compute-0 sudo[164498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afkmaopkmppcbhajjskuonkpuygpllma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009218.4329307-321-261795324329402/AnsiballZ_command.py'
Nov 24 18:33:38 compute-0 sudo[164498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:38 compute-0 python3.9[164500]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:93:45:69:49" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:33:38 compute-0 ovs-vsctl[164501]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:93:45:69:49 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 24 18:33:38 compute-0 sudo[164498]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:39 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v533: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:39 compute-0 ceph-mon[74927]: pgmap v533: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:39 compute-0 sudo[164651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxsuytoqdhdomgmllpzcaoyigwqdwvmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009219.2289236-330-244054944708503/AnsiballZ_command.py'
Nov 24 18:33:39 compute-0 sudo[164651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:39 compute-0 python3.9[164653]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:33:39 compute-0 sudo[164651]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:40 compute-0 sudo[164806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuqvvocplnoikcyjormfuxwvixaokvlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009219.9738202-338-212442385183146/AnsiballZ_command.py'
Nov 24 18:33:40 compute-0 sudo[164806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:40 compute-0 python3.9[164808]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:33:40 compute-0 ovs-vsctl[164809]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 24 18:33:40 compute-0 sudo[164806]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:41 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v534: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:41 compute-0 ceph-mon[74927]: pgmap v534: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:41 compute-0 python3.9[164959]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:33:41 compute-0 sudo[165111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxsykilxvhliulqlmisuuoaxseoryikd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009221.6625576-355-141669135689515/AnsiballZ_file.py'
Nov 24 18:33:41 compute-0 sudo[165111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:42 compute-0 python3.9[165113]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:33:42 compute-0 sudo[165111]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:33:42 compute-0 sudo[165263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbodxgfeabomtqmhvjubckxhirlxqvju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009222.4366007-363-105293218515134/AnsiballZ_stat.py'
Nov 24 18:33:42 compute-0 sudo[165263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:42 compute-0 python3.9[165265]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:33:42 compute-0 sudo[165263]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:33:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:33:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:33:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:33:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:33:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:33:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:33:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:33:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:33:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:33:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:33:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:33:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:33:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:33:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:33:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:33:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:33:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:33:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:33:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:33:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:33:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:33:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:33:43 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v535: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:43 compute-0 sudo[165341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvmvkmvtjygqrqatuughvjwyjugznnoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009222.4366007-363-105293218515134/AnsiballZ_file.py'
Nov 24 18:33:43 compute-0 sudo[165341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:43 compute-0 ceph-mon[74927]: pgmap v535: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:43 compute-0 python3.9[165343]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:33:43 compute-0 sudo[165341]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:43 compute-0 sudo[165493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igdgwncnyiryqgzgxwooxmvncatqzncv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009223.6626122-363-116338521757411/AnsiballZ_stat.py'
Nov 24 18:33:43 compute-0 sudo[165493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:44 compute-0 python3.9[165495]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:33:44 compute-0 sudo[165493]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:44 compute-0 sudo[165571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daypovumiycawnuoxovonpmnhyxyohmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009223.6626122-363-116338521757411/AnsiballZ_file.py'
Nov 24 18:33:44 compute-0 sudo[165571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:44 compute-0 python3.9[165573]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:33:44 compute-0 sudo[165571]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:45 compute-0 sudo[165723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrvleohedgjuugcrsmaoavpyoqwcionc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009224.8297164-386-79246993362222/AnsiballZ_file.py'
Nov 24 18:33:45 compute-0 sudo[165723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:45 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v536: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:45 compute-0 ceph-mon[74927]: pgmap v536: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:45 compute-0 python3.9[165725]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:33:45 compute-0 sudo[165723]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:45 compute-0 sudo[165875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbxdgdtzinxsuxgunsonlaijqmkkyiuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009225.6066167-394-157368717489549/AnsiballZ_stat.py'
Nov 24 18:33:45 compute-0 sudo[165875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:46 compute-0 python3.9[165877]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:33:46 compute-0 sudo[165875]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:46 compute-0 sudo[165953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwjczkkiwmolpbwktvthaqebjopijfdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009225.6066167-394-157368717489549/AnsiballZ_file.py'
Nov 24 18:33:46 compute-0 sudo[165953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:46 compute-0 python3.9[165955]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:33:46 compute-0 sudo[165953]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:47 compute-0 sudo[166105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrivwordzbobqfauxrsepumrjxfmsvds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009226.8478506-406-145437626639772/AnsiballZ_stat.py'
Nov 24 18:33:47 compute-0 sudo[166105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:47 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v537: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:47 compute-0 ceph-mon[74927]: pgmap v537: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:47 compute-0 python3.9[166107]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:33:47 compute-0 sudo[166105]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:33:47 compute-0 sudo[166183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuonpiuubmhozubmeqkjvibpcngoviiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009226.8478506-406-145437626639772/AnsiballZ_file.py'
Nov 24 18:33:47 compute-0 sudo[166183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:47 compute-0 python3.9[166185]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:33:47 compute-0 sudo[166183]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:48 compute-0 sudo[166335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlepimuwczdorvogczaokyydmdzbrgfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009228.1343026-418-213102314738223/AnsiballZ_systemd.py'
Nov 24 18:33:48 compute-0 sudo[166335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:48 compute-0 python3.9[166337]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:33:48 compute-0 systemd[1]: Reloading.
Nov 24 18:33:48 compute-0 systemd-rc-local-generator[166361]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:33:48 compute-0 systemd-sysv-generator[166366]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:33:49 compute-0 sudo[166335]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:49 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v538: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:49 compute-0 ceph-mon[74927]: pgmap v538: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:49 compute-0 sudo[166524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okmrxxtztmjzmatglsnxgmwpgvgrfpbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009229.327116-426-24616359228646/AnsiballZ_stat.py'
Nov 24 18:33:49 compute-0 sudo[166524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:49 compute-0 python3.9[166526]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:33:49 compute-0 sudo[166524]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:50 compute-0 sudo[166602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klssyrcdmgtqagzhvcomyreumryuxogo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009229.327116-426-24616359228646/AnsiballZ_file.py'
Nov 24 18:33:50 compute-0 sudo[166602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:50 compute-0 python3.9[166604]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:33:50 compute-0 sudo[166602]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:50 compute-0 sudo[166754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llnurgkkxrcahrtgumlvgsjryjagxorg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009230.5266967-438-15721831097231/AnsiballZ_stat.py'
Nov 24 18:33:50 compute-0 sudo[166754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:50 compute-0 python3.9[166756]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:33:51 compute-0 sudo[166754]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:51 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v539: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:51 compute-0 sudo[166832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybdwtahsbhawygjbxulfkbgqwhadqtbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009230.5266967-438-15721831097231/AnsiballZ_file.py'
Nov 24 18:33:51 compute-0 sudo[166832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:51 compute-0 ceph-mon[74927]: pgmap v539: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:51 compute-0 python3.9[166834]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:33:51 compute-0 sudo[166832]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:52 compute-0 sudo[166984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sulsxvuaqubsjzehobgogobfrxynclcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009231.651421-450-44835468836006/AnsiballZ_systemd.py'
Nov 24 18:33:52 compute-0 sudo[166984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:52 compute-0 python3.9[166986]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:33:52 compute-0 systemd[1]: Reloading.
Nov 24 18:33:52 compute-0 systemd-rc-local-generator[167014]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:33:52 compute-0 systemd-sysv-generator[167018]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:33:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:33:52 compute-0 systemd[1]: Starting Create netns directory...
Nov 24 18:33:52 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 18:33:52 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 18:33:52 compute-0 systemd[1]: Finished Create netns directory.
Nov 24 18:33:52 compute-0 sudo[166984]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:53 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v540: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:53 compute-0 ceph-mon[74927]: pgmap v540: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:53 compute-0 sudo[167177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tspkbubkxuzcwwpxrvpugbppcerbsqyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009233.1234257-460-240287112982917/AnsiballZ_file.py'
Nov 24 18:33:53 compute-0 sudo[167177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:53 compute-0 python3.9[167179]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:33:53 compute-0 sudo[167177]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:54 compute-0 sudo[167329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbhomirjcqlvzcwghmximrnxhhgpgoqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009233.8872583-468-106279206090496/AnsiballZ_stat.py'
Nov 24 18:33:54 compute-0 sudo[167329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:54 compute-0 python3.9[167331]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:33:54 compute-0 sudo[167329]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:54 compute-0 sudo[167452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-valmbnqthvlqzcbnygkccnsavpfqqrfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009233.8872583-468-106279206090496/AnsiballZ_copy.py'
Nov 24 18:33:54 compute-0 sudo[167452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:55 compute-0 python3.9[167454]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009233.8872583-468-106279206090496/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:33:55 compute-0 sudo[167452]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:55 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v541: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:55 compute-0 ceph-mon[74927]: pgmap v541: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:55 compute-0 sudo[167604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptbqxuslvuqaumshakxzodbrebhkwnxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009235.512751-485-237932287148182/AnsiballZ_file.py'
Nov 24 18:33:55 compute-0 sudo[167604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:56 compute-0 python3.9[167606]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:33:56 compute-0 sudo[167604]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:56 compute-0 sudo[167756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvuulwumjdiwzljbhbrhmbutelvfrzur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009236.2765903-493-110960222667399/AnsiballZ_stat.py'
Nov 24 18:33:56 compute-0 sudo[167756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:56 compute-0 python3.9[167758]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:33:56 compute-0 sudo[167756]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:57 compute-0 sudo[167879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wobjxwwptrblqkpabjdgdvpwwhfckztc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009236.2765903-493-110960222667399/AnsiballZ_copy.py'
Nov 24 18:33:57 compute-0 sudo[167879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:57 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v542: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:57 compute-0 python3.9[167881]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764009236.2765903-493-110960222667399/.source.json _original_basename=.i2skxlep follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:33:57 compute-0 ceph-mon[74927]: pgmap v542: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:57 compute-0 sudo[167879]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:33:57 compute-0 sudo[168031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdaoclzjqtdjslkxekcwnmsvzdcuxggp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009237.5870087-508-189675428840169/AnsiballZ_file.py'
Nov 24 18:33:57 compute-0 sudo[168031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:58 compute-0 python3.9[168033]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:33:58 compute-0 sudo[168031]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:59 compute-0 sudo[168184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igjffbaidigvtfcboqnaxjsqdjybpjkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009238.3513255-516-167293926649006/AnsiballZ_stat.py'
Nov 24 18:33:59 compute-0 sudo[168184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:59 compute-0 sudo[168184]: pam_unix(sudo:session): session closed for user root
Nov 24 18:33:59 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v543: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:59 compute-0 ceph-mon[74927]: pgmap v543: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:33:59 compute-0 sudo[168307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahohbecadjyvefayxccdkcdchrrveifa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009238.3513255-516-167293926649006/AnsiballZ_copy.py'
Nov 24 18:33:59 compute-0 sudo[168307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:33:59 compute-0 sudo[168307]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:00 compute-0 sudo[168459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrsdknbkkwfgpnwvchghgfvwpxehxvvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009240.047765-533-179908911578559/AnsiballZ_container_config_data.py'
Nov 24 18:34:00 compute-0 sudo[168459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:00 compute-0 python3.9[168461]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 24 18:34:00 compute-0 sudo[168459]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:01 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v544: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:01 compute-0 ceph-mon[74927]: pgmap v544: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:01 compute-0 sudo[168611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzwijizzavqyenvucjygxnbmeihajvkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009240.9720638-542-30181145882197/AnsiballZ_container_config_hash.py'
Nov 24 18:34:01 compute-0 sudo[168611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:01 compute-0 python3.9[168613]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 18:34:01 compute-0 sudo[168611]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:02 compute-0 sudo[168763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuqulhzrftirrtrlpghggvftlrcwspzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009241.8809667-551-97988543677801/AnsiballZ_podman_container_info.py'
Nov 24 18:34:02 compute-0 sudo[168763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:02 compute-0 python3.9[168765]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 24 18:34:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:34:02 compute-0 sudo[168763]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:03 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v545: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:03 compute-0 ceph-mon[74927]: pgmap v545: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:03 compute-0 sudo[168942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akqwyeflfvlvqpltyktltpmlsabraupi ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764009243.4231231-564-165277257276606/AnsiballZ_edpm_container_manage.py'
Nov 24 18:34:03 compute-0 sudo[168942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:04 compute-0 python3[168944]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 18:34:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:34:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:34:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:34:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:34:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:34:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:34:05 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v546: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:05 compute-0 ceph-mon[74927]: pgmap v546: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:07 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v547: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:07 compute-0 ceph-mon[74927]: pgmap v547: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:34:08 compute-0 podman[168957]: 2025-11-24 18:34:08.900029894 +0000 UTC m=+4.641267200 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 24 18:34:09 compute-0 podman[169077]: 2025-11-24 18:34:09.046773563 +0000 UTC m=+0.055062422 container create 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:34:09 compute-0 podman[169077]: 2025-11-24 18:34:09.01078114 +0000 UTC m=+0.019069979 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 24 18:34:09 compute-0 python3[168944]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 24 18:34:09 compute-0 sudo[168942]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:09 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v548: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:09 compute-0 ceph-mon[74927]: pgmap v548: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:09 compute-0 sudo[169265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tixplwhwkoonfswyuztxxrhpfbtvghqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009249.401601-572-82216201512757/AnsiballZ_stat.py'
Nov 24 18:34:09 compute-0 sudo[169265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:09 compute-0 python3.9[169267]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:34:09 compute-0 sudo[169265]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:10 compute-0 sudo[169419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjqnbnnvmtnpxerkaridkwwncauzwerx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009250.080918-581-135092568890493/AnsiballZ_file.py'
Nov 24 18:34:10 compute-0 sudo[169419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:10 compute-0 python3.9[169421]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:34:10 compute-0 sudo[169419]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:10 compute-0 sudo[169495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icvregoifrkoueefhfejhfldzzspruyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009250.080918-581-135092568890493/AnsiballZ_stat.py'
Nov 24 18:34:10 compute-0 sudo[169495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:10 compute-0 python3.9[169497]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:34:10 compute-0 sudo[169495]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:11 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v549: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:11 compute-0 ceph-mon[74927]: pgmap v549: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:11 compute-0 sudo[169646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lplhevviwnzhevekoaydjgpqqaojlnkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009251.007633-581-209477034811657/AnsiballZ_copy.py'
Nov 24 18:34:11 compute-0 sudo[169646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:11 compute-0 python3.9[169648]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764009251.007633-581-209477034811657/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:34:11 compute-0 sudo[169646]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:11 compute-0 sudo[169722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dblapdpggbjiauxrsebkpigjvschqtvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009251.007633-581-209477034811657/AnsiballZ_systemd.py'
Nov 24 18:34:11 compute-0 sudo[169722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:12 compute-0 python3.9[169724]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 18:34:12 compute-0 systemd[1]: Reloading.
Nov 24 18:34:12 compute-0 systemd-rc-local-generator[169745]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:34:12 compute-0 systemd-sysv-generator[169749]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:34:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:34:12 compute-0 sudo[169722]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:12 compute-0 sudo[169833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nadfdluvveyujygvqkmgwsojmdcqchhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009251.007633-581-209477034811657/AnsiballZ_systemd.py'
Nov 24 18:34:12 compute-0 sudo[169833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:13 compute-0 python3.9[169835]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:34:13 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v550: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:13 compute-0 ceph-mon[74927]: pgmap v550: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:14 compute-0 systemd[1]: Reloading.
Nov 24 18:34:14 compute-0 systemd-rc-local-generator[169865]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:34:14 compute-0 systemd-sysv-generator[169868]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:34:14 compute-0 systemd[1]: Starting ovn_controller container...
Nov 24 18:34:14 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d7ba1ba1118a7fab99be6adfd7018106b5ccb6e47b758d2c8e6c85a7bb6839d/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 24 18:34:14 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d.
Nov 24 18:34:14 compute-0 podman[169876]: 2025-11-24 18:34:14.580288249 +0000 UTC m=+0.094057469 container init 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:34:14 compute-0 ovn_controller[169892]: + sudo -E kolla_set_configs
Nov 24 18:34:14 compute-0 podman[169876]: 2025-11-24 18:34:14.600326131 +0000 UTC m=+0.114095351 container start 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 24 18:34:14 compute-0 edpm-start-podman-container[169876]: ovn_controller
Nov 24 18:34:14 compute-0 systemd[1]: Created slice User Slice of UID 0.
Nov 24 18:34:14 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 24 18:34:14 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 24 18:34:14 compute-0 systemd[1]: Starting User Manager for UID 0...
Nov 24 18:34:14 compute-0 systemd[169931]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Nov 24 18:34:14 compute-0 edpm-start-podman-container[169875]: Creating additional drop-in dependency for "ovn_controller" (258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d)
Nov 24 18:34:14 compute-0 podman[169899]: 2025-11-24 18:34:14.687570489 +0000 UTC m=+0.077976446 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251118, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 18:34:14 compute-0 systemd[1]: 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d-73b1918b52bf1f65.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 18:34:14 compute-0 systemd[1]: 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d-73b1918b52bf1f65.service: Failed with result 'exit-code'.
Nov 24 18:34:14 compute-0 systemd[1]: Reloading.
Nov 24 18:34:14 compute-0 systemd-rc-local-generator[169976]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:34:14 compute-0 systemd-sysv-generator[169979]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:34:14 compute-0 systemd[169931]: Queued start job for default target Main User Target.
Nov 24 18:34:14 compute-0 systemd[169931]: Created slice User Application Slice.
Nov 24 18:34:14 compute-0 systemd[169931]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 24 18:34:14 compute-0 systemd[169931]: Started Daily Cleanup of User's Temporary Directories.
Nov 24 18:34:14 compute-0 systemd[169931]: Reached target Paths.
Nov 24 18:34:14 compute-0 systemd[169931]: Reached target Timers.
Nov 24 18:34:14 compute-0 systemd[169931]: Starting D-Bus User Message Bus Socket...
Nov 24 18:34:14 compute-0 systemd[169931]: Starting Create User's Volatile Files and Directories...
Nov 24 18:34:14 compute-0 systemd[169931]: Listening on D-Bus User Message Bus Socket.
Nov 24 18:34:14 compute-0 systemd[169931]: Finished Create User's Volatile Files and Directories.
Nov 24 18:34:14 compute-0 systemd[169931]: Reached target Sockets.
Nov 24 18:34:14 compute-0 systemd[169931]: Reached target Basic System.
Nov 24 18:34:14 compute-0 systemd[169931]: Reached target Main User Target.
Nov 24 18:34:14 compute-0 systemd[169931]: Startup finished in 153ms.
Nov 24 18:34:14 compute-0 systemd[1]: Started User Manager for UID 0.
Nov 24 18:34:14 compute-0 systemd[1]: Started ovn_controller container.
Nov 24 18:34:14 compute-0 systemd[1]: Started Session c1 of User root.
Nov 24 18:34:15 compute-0 sudo[169833]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:15 compute-0 ovn_controller[169892]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 18:34:15 compute-0 ovn_controller[169892]: INFO:__main__:Validating config file
Nov 24 18:34:15 compute-0 ovn_controller[169892]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 18:34:15 compute-0 ovn_controller[169892]: INFO:__main__:Writing out command to execute
Nov 24 18:34:15 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 24 18:34:15 compute-0 ovn_controller[169892]: ++ cat /run_command
Nov 24 18:34:15 compute-0 ovn_controller[169892]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 24 18:34:15 compute-0 ovn_controller[169892]: + ARGS=
Nov 24 18:34:15 compute-0 ovn_controller[169892]: + sudo kolla_copy_cacerts
Nov 24 18:34:15 compute-0 systemd[1]: Started Session c2 of User root.
Nov 24 18:34:15 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 24 18:34:15 compute-0 ovn_controller[169892]: + [[ ! -n '' ]]
Nov 24 18:34:15 compute-0 ovn_controller[169892]: + . kolla_extend_start
Nov 24 18:34:15 compute-0 ovn_controller[169892]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 24 18:34:15 compute-0 ovn_controller[169892]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 24 18:34:15 compute-0 ovn_controller[169892]: + umask 0022
Nov 24 18:34:15 compute-0 ovn_controller[169892]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 24 18:34:15 compute-0 ovn_controller[169892]: 2025-11-24T18:34:15Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 24 18:34:15 compute-0 ovn_controller[169892]: 2025-11-24T18:34:15Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 24 18:34:15 compute-0 ovn_controller[169892]: 2025-11-24T18:34:15Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 24 18:34:15 compute-0 ovn_controller[169892]: 2025-11-24T18:34:15Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 24 18:34:15 compute-0 ovn_controller[169892]: 2025-11-24T18:34:15Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 24 18:34:15 compute-0 ovn_controller[169892]: 2025-11-24T18:34:15Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 24 18:34:15 compute-0 NetworkManager[48851]: <info>  [1764009255.1549] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 24 18:34:15 compute-0 NetworkManager[48851]: <info>  [1764009255.1556] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 18:34:15 compute-0 NetworkManager[48851]: <info>  [1764009255.1567] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 24 18:34:15 compute-0 NetworkManager[48851]: <info>  [1764009255.1572] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 24 18:34:15 compute-0 NetworkManager[48851]: <info>  [1764009255.1576] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 24 18:34:15 compute-0 kernel: br-int: entered promiscuous mode
Nov 24 18:34:15 compute-0 ovn_controller[169892]: 2025-11-24T18:34:15Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 24 18:34:15 compute-0 systemd-udevd[170024]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 18:34:15 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v551: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:15 compute-0 ovn_controller[169892]: 2025-11-24T18:34:15Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 24 18:34:15 compute-0 ovn_controller[169892]: 2025-11-24T18:34:15Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 24 18:34:15 compute-0 ovn_controller[169892]: 2025-11-24T18:34:15Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 24 18:34:15 compute-0 ovn_controller[169892]: 2025-11-24T18:34:15Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 24 18:34:15 compute-0 ovn_controller[169892]: 2025-11-24T18:34:15Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 24 18:34:15 compute-0 ovn_controller[169892]: 2025-11-24T18:34:15Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 24 18:34:15 compute-0 ovn_controller[169892]: 2025-11-24T18:34:15Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 24 18:34:15 compute-0 ovn_controller[169892]: 2025-11-24T18:34:15Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 24 18:34:15 compute-0 ovn_controller[169892]: 2025-11-24T18:34:15Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 24 18:34:15 compute-0 ovn_controller[169892]: 2025-11-24T18:34:15Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 24 18:34:15 compute-0 ovn_controller[169892]: 2025-11-24T18:34:15Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 24 18:34:15 compute-0 ovn_controller[169892]: 2025-11-24T18:34:15Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 24 18:34:15 compute-0 ovn_controller[169892]: 2025-11-24T18:34:15Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 24 18:34:15 compute-0 ovn_controller[169892]: 2025-11-24T18:34:15Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 24 18:34:15 compute-0 ovn_controller[169892]: 2025-11-24T18:34:15Z|00022|main|INFO|OVS feature set changed, force recompute.
Nov 24 18:34:15 compute-0 ovn_controller[169892]: 2025-11-24T18:34:15Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 24 18:34:15 compute-0 ovn_controller[169892]: 2025-11-24T18:34:15Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 24 18:34:15 compute-0 ovn_controller[169892]: 2025-11-24T18:34:15Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 24 18:34:15 compute-0 ovn_controller[169892]: 2025-11-24T18:34:15Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 24 18:34:15 compute-0 ovn_controller[169892]: 2025-11-24T18:34:15Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 24 18:34:15 compute-0 ovn_controller[169892]: 2025-11-24T18:34:15Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 24 18:34:15 compute-0 ovn_controller[169892]: 2025-11-24T18:34:15Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 24 18:34:15 compute-0 ovn_controller[169892]: 2025-11-24T18:34:15Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 24 18:34:15 compute-0 NetworkManager[48851]: <info>  [1764009255.2960] manager: (ovn-931e5e-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 24 18:34:15 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Nov 24 18:34:15 compute-0 systemd-udevd[170026]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 18:34:15 compute-0 NetworkManager[48851]: <info>  [1764009255.3233] device (genev_sys_6081): carrier: link connected
Nov 24 18:34:15 compute-0 NetworkManager[48851]: <info>  [1764009255.3236] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Nov 24 18:34:15 compute-0 ceph-mon[74927]: pgmap v551: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:15 compute-0 sudo[170154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idxxosfnkomwbqghnzuuzwomxclxrham ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009255.358692-609-130767014065788/AnsiballZ_command.py'
Nov 24 18:34:15 compute-0 sudo[170154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:15 compute-0 python3.9[170156]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:34:15 compute-0 ovs-vsctl[170157]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 24 18:34:15 compute-0 sudo[170154]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:16 compute-0 sudo[170308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jycehifhpqpyuxxkmpyylgrwkfwqbnsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009256.0345418-617-130373604314244/AnsiballZ_command.py'
Nov 24 18:34:16 compute-0 sudo[170308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:16 compute-0 python3.9[170310]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:34:16 compute-0 ovs-vsctl[170312]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 24 18:34:16 compute-0 sudo[170308]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:17 compute-0 sudo[170463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egnewozcikiynugrwaeqgsxbkomaydhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009256.9956305-631-17119806236274/AnsiballZ_command.py'
Nov 24 18:34:17 compute-0 sudo[170463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:17 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v552: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:17 compute-0 ceph-mon[74927]: pgmap v552: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:17 compute-0 python3.9[170465]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:34:17 compute-0 ovs-vsctl[170466]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 24 18:34:17 compute-0 sudo[170463]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:34:17 compute-0 sshd-session[158405]: Connection closed by 192.168.122.30 port 60402
Nov 24 18:34:17 compute-0 sshd-session[158402]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:34:17 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Nov 24 18:34:17 compute-0 systemd[1]: session-48.scope: Consumed 56.448s CPU time.
Nov 24 18:34:17 compute-0 systemd-logind[822]: Session 48 logged out. Waiting for processes to exit.
Nov 24 18:34:17 compute-0 systemd-logind[822]: Removed session 48.
Nov 24 18:34:19 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v553: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:19 compute-0 sudo[170491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:34:19 compute-0 sudo[170491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:34:19 compute-0 sudo[170491]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:19 compute-0 sudo[170516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:34:19 compute-0 sudo[170516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:34:19 compute-0 sudo[170516]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:19 compute-0 ceph-mon[74927]: pgmap v553: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:19 compute-0 sudo[170541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:34:19 compute-0 sudo[170541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:34:19 compute-0 sudo[170541]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:19 compute-0 sudo[170566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:34:19 compute-0 sudo[170566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:34:19 compute-0 sudo[170566]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:34:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:34:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:34:19 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:34:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:34:19 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:34:19 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 09394413-e15c-4775-8e6b-b00d248be74d does not exist
Nov 24 18:34:19 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev f045bbd7-1f35-41f8-a079-87b6455dcac1 does not exist
Nov 24 18:34:19 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev a69aa5c2-74d6-4547-bf67-32847d327369 does not exist
Nov 24 18:34:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:34:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:34:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:34:19 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:34:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:34:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:34:19 compute-0 sudo[170622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:34:19 compute-0 sudo[170622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:34:19 compute-0 sudo[170622]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:20 compute-0 sudo[170647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:34:20 compute-0 sudo[170647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:34:20 compute-0 sudo[170647]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:20 compute-0 sudo[170672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:34:20 compute-0 sudo[170672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:34:20 compute-0 sudo[170672]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:20 compute-0 sudo[170697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:34:20 compute-0 sudo[170697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:34:20 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:34:20 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:34:20 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:34:20 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:34:20 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:34:20 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:34:20 compute-0 podman[170762]: 2025-11-24 18:34:20.548314522 +0000 UTC m=+0.070353064 container create 5289f896b0bcfd5d291c0e4eab84e7e0f1b1aed42d5fc060ce80ba6e6adf0203 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:34:20 compute-0 podman[170762]: 2025-11-24 18:34:20.498332139 +0000 UTC m=+0.020370701 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:34:20 compute-0 systemd[1]: Started libpod-conmon-5289f896b0bcfd5d291c0e4eab84e7e0f1b1aed42d5fc060ce80ba6e6adf0203.scope.
Nov 24 18:34:20 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:34:20 compute-0 podman[170762]: 2025-11-24 18:34:20.756625815 +0000 UTC m=+0.278664387 container init 5289f896b0bcfd5d291c0e4eab84e7e0f1b1aed42d5fc060ce80ba6e6adf0203 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_einstein, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 18:34:20 compute-0 podman[170762]: 2025-11-24 18:34:20.767666262 +0000 UTC m=+0.289704804 container start 5289f896b0bcfd5d291c0e4eab84e7e0f1b1aed42d5fc060ce80ba6e6adf0203 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:34:20 compute-0 podman[170762]: 2025-11-24 18:34:20.77081049 +0000 UTC m=+0.292849042 container attach 5289f896b0bcfd5d291c0e4eab84e7e0f1b1aed42d5fc060ce80ba6e6adf0203 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_einstein, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:34:20 compute-0 vigorous_einstein[170776]: 167 167
Nov 24 18:34:20 compute-0 systemd[1]: libpod-5289f896b0bcfd5d291c0e4eab84e7e0f1b1aed42d5fc060ce80ba6e6adf0203.scope: Deactivated successfully.
Nov 24 18:34:20 compute-0 podman[170762]: 2025-11-24 18:34:20.773780755 +0000 UTC m=+0.295819337 container died 5289f896b0bcfd5d291c0e4eab84e7e0f1b1aed42d5fc060ce80ba6e6adf0203 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 24 18:34:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ed873de0b677265f2ef3754e8cfb95ae3d8b1ac49fb43584d6a71fe74407b4e-merged.mount: Deactivated successfully.
Nov 24 18:34:20 compute-0 podman[170762]: 2025-11-24 18:34:20.817279555 +0000 UTC m=+0.339318097 container remove 5289f896b0bcfd5d291c0e4eab84e7e0f1b1aed42d5fc060ce80ba6e6adf0203 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_einstein, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 18:34:20 compute-0 systemd[1]: libpod-conmon-5289f896b0bcfd5d291c0e4eab84e7e0f1b1aed42d5fc060ce80ba6e6adf0203.scope: Deactivated successfully.
Nov 24 18:34:20 compute-0 podman[170802]: 2025-11-24 18:34:20.989315238 +0000 UTC m=+0.043100681 container create 10e217e099d59753855e0391c5f605a2c5b7806fa48a1daf963c964bce9b2f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 18:34:21 compute-0 systemd[1]: Started libpod-conmon-10e217e099d59753855e0391c5f605a2c5b7806fa48a1daf963c964bce9b2f5b.scope.
Nov 24 18:34:21 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cfb1a84c9f79d4189214b3f31c81f4ed4a1b349db3b706daaf941efe82c61a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cfb1a84c9f79d4189214b3f31c81f4ed4a1b349db3b706daaf941efe82c61a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cfb1a84c9f79d4189214b3f31c81f4ed4a1b349db3b706daaf941efe82c61a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cfb1a84c9f79d4189214b3f31c81f4ed4a1b349db3b706daaf941efe82c61a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cfb1a84c9f79d4189214b3f31c81f4ed4a1b349db3b706daaf941efe82c61a8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:34:21 compute-0 podman[170802]: 2025-11-24 18:34:21.061300563 +0000 UTC m=+0.115085996 container init 10e217e099d59753855e0391c5f605a2c5b7806fa48a1daf963c964bce9b2f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:34:21 compute-0 podman[170802]: 2025-11-24 18:34:20.972122117 +0000 UTC m=+0.025907540 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:34:21 compute-0 podman[170802]: 2025-11-24 18:34:21.069917219 +0000 UTC m=+0.123702622 container start 10e217e099d59753855e0391c5f605a2c5b7806fa48a1daf963c964bce9b2f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 18:34:21 compute-0 podman[170802]: 2025-11-24 18:34:21.073197791 +0000 UTC m=+0.126983214 container attach 10e217e099d59753855e0391c5f605a2c5b7806fa48a1daf963c964bce9b2f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Nov 24 18:34:21 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v554: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:21 compute-0 ceph-mon[74927]: pgmap v554: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:22 compute-0 festive_kilby[170818]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:34:22 compute-0 festive_kilby[170818]: --> relative data size: 1.0
Nov 24 18:34:22 compute-0 festive_kilby[170818]: --> All data devices are unavailable
Nov 24 18:34:22 compute-0 systemd[1]: libpod-10e217e099d59753855e0391c5f605a2c5b7806fa48a1daf963c964bce9b2f5b.scope: Deactivated successfully.
Nov 24 18:34:22 compute-0 systemd[1]: libpod-10e217e099d59753855e0391c5f605a2c5b7806fa48a1daf963c964bce9b2f5b.scope: Consumed 1.027s CPU time.
Nov 24 18:34:22 compute-0 podman[170802]: 2025-11-24 18:34:22.142540559 +0000 UTC m=+1.196325972 container died 10e217e099d59753855e0391c5f605a2c5b7806fa48a1daf963c964bce9b2f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 18:34:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-9cfb1a84c9f79d4189214b3f31c81f4ed4a1b349db3b706daaf941efe82c61a8-merged.mount: Deactivated successfully.
Nov 24 18:34:22 compute-0 podman[170802]: 2025-11-24 18:34:22.206177385 +0000 UTC m=+1.259962778 container remove 10e217e099d59753855e0391c5f605a2c5b7806fa48a1daf963c964bce9b2f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:34:22 compute-0 systemd[1]: libpod-conmon-10e217e099d59753855e0391c5f605a2c5b7806fa48a1daf963c964bce9b2f5b.scope: Deactivated successfully.
Nov 24 18:34:22 compute-0 sudo[170697]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:22 compute-0 sudo[170858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:34:22 compute-0 sudo[170858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:34:22 compute-0 sudo[170858]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:22 compute-0 sudo[170883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:34:22 compute-0 sudo[170883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:34:22 compute-0 sudo[170883]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:22 compute-0 sudo[170908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:34:22 compute-0 sudo[170908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:34:22 compute-0 sudo[170908]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:22 compute-0 sudo[170933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:34:22 compute-0 sudo[170933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:34:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:34:22 compute-0 podman[170999]: 2025-11-24 18:34:22.775763985 +0000 UTC m=+0.036200049 container create 8330551fd65cdf9717daa997fd4259c1bd4a174f47b8e1a1e5c9b34194c589d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_herschel, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:34:22 compute-0 systemd[1]: Started libpod-conmon-8330551fd65cdf9717daa997fd4259c1bd4a174f47b8e1a1e5c9b34194c589d9.scope.
Nov 24 18:34:22 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:34:22 compute-0 podman[170999]: 2025-11-24 18:34:22.84897811 +0000 UTC m=+0.109414184 container init 8330551fd65cdf9717daa997fd4259c1bd4a174f47b8e1a1e5c9b34194c589d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_herschel, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 24 18:34:22 compute-0 podman[170999]: 2025-11-24 18:34:22.853821712 +0000 UTC m=+0.114257766 container start 8330551fd65cdf9717daa997fd4259c1bd4a174f47b8e1a1e5c9b34194c589d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_herschel, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:34:22 compute-0 podman[170999]: 2025-11-24 18:34:22.759958269 +0000 UTC m=+0.020394353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:34:22 compute-0 podman[170999]: 2025-11-24 18:34:22.857096604 +0000 UTC m=+0.117532658 container attach 8330551fd65cdf9717daa997fd4259c1bd4a174f47b8e1a1e5c9b34194c589d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 18:34:22 compute-0 reverent_herschel[171015]: 167 167
Nov 24 18:34:22 compute-0 systemd[1]: libpod-8330551fd65cdf9717daa997fd4259c1bd4a174f47b8e1a1e5c9b34194c589d9.scope: Deactivated successfully.
Nov 24 18:34:22 compute-0 podman[170999]: 2025-11-24 18:34:22.859408442 +0000 UTC m=+0.119844486 container died 8330551fd65cdf9717daa997fd4259c1bd4a174f47b8e1a1e5c9b34194c589d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_herschel, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 18:34:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d590cc5543b40d4b26fab2c33c48fe79aab9cd2534c3bbc633c73aaf16ff5b4-merged.mount: Deactivated successfully.
Nov 24 18:34:22 compute-0 podman[170999]: 2025-11-24 18:34:22.892478911 +0000 UTC m=+0.152915005 container remove 8330551fd65cdf9717daa997fd4259c1bd4a174f47b8e1a1e5c9b34194c589d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_herschel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:34:22 compute-0 systemd[1]: libpod-conmon-8330551fd65cdf9717daa997fd4259c1bd4a174f47b8e1a1e5c9b34194c589d9.scope: Deactivated successfully.
Nov 24 18:34:23 compute-0 podman[171038]: 2025-11-24 18:34:23.064176306 +0000 UTC m=+0.043181834 container create ac06c040cb15be6c0c9b0944a2429b2e3eba7e0e45c6b6ad65b0cc4b04fd2df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_curie, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 18:34:23 compute-0 systemd[1]: Started libpod-conmon-ac06c040cb15be6c0c9b0944a2429b2e3eba7e0e45c6b6ad65b0cc4b04fd2df2.scope.
Nov 24 18:34:23 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:34:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc08636e7a676dc42ac5fc7dfcef6bdcba4ec8149da59bb5213cae54e231929/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:34:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc08636e7a676dc42ac5fc7dfcef6bdcba4ec8149da59bb5213cae54e231929/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:34:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc08636e7a676dc42ac5fc7dfcef6bdcba4ec8149da59bb5213cae54e231929/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:34:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc08636e7a676dc42ac5fc7dfcef6bdcba4ec8149da59bb5213cae54e231929/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:34:23 compute-0 podman[171038]: 2025-11-24 18:34:23.043143728 +0000 UTC m=+0.022149296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:34:23 compute-0 podman[171038]: 2025-11-24 18:34:23.138663073 +0000 UTC m=+0.117668611 container init ac06c040cb15be6c0c9b0944a2429b2e3eba7e0e45c6b6ad65b0cc4b04fd2df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 18:34:23 compute-0 podman[171038]: 2025-11-24 18:34:23.144189491 +0000 UTC m=+0.123194999 container start ac06c040cb15be6c0c9b0944a2429b2e3eba7e0e45c6b6ad65b0cc4b04fd2df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:34:23 compute-0 podman[171038]: 2025-11-24 18:34:23.146735575 +0000 UTC m=+0.125741093 container attach ac06c040cb15be6c0c9b0944a2429b2e3eba7e0e45c6b6ad65b0cc4b04fd2df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:34:23 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v555: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:23 compute-0 ceph-mon[74927]: pgmap v555: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:23 compute-0 sshd-session[171059]: Accepted publickey for zuul from 192.168.122.30 port 43756 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:34:23 compute-0 systemd-logind[822]: New session 50 of user zuul.
Nov 24 18:34:23 compute-0 systemd[1]: Started Session 50 of User zuul.
Nov 24 18:34:23 compute-0 sshd-session[171059]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:34:23 compute-0 interesting_curie[171054]: {
Nov 24 18:34:23 compute-0 interesting_curie[171054]:     "0": [
Nov 24 18:34:23 compute-0 interesting_curie[171054]:         {
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "devices": [
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "/dev/loop3"
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             ],
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "lv_name": "ceph_lv0",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "lv_size": "21470642176",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "name": "ceph_lv0",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "tags": {
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.cluster_name": "ceph",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.crush_device_class": "",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.encrypted": "0",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.osd_id": "0",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.type": "block",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.vdo": "0"
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             },
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "type": "block",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "vg_name": "ceph_vg0"
Nov 24 18:34:23 compute-0 interesting_curie[171054]:         }
Nov 24 18:34:23 compute-0 interesting_curie[171054]:     ],
Nov 24 18:34:23 compute-0 interesting_curie[171054]:     "1": [
Nov 24 18:34:23 compute-0 interesting_curie[171054]:         {
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "devices": [
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "/dev/loop4"
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             ],
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "lv_name": "ceph_lv1",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "lv_size": "21470642176",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "name": "ceph_lv1",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "tags": {
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.cluster_name": "ceph",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.crush_device_class": "",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.encrypted": "0",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.osd_id": "1",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.type": "block",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.vdo": "0"
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             },
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "type": "block",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "vg_name": "ceph_vg1"
Nov 24 18:34:23 compute-0 interesting_curie[171054]:         }
Nov 24 18:34:23 compute-0 interesting_curie[171054]:     ],
Nov 24 18:34:23 compute-0 interesting_curie[171054]:     "2": [
Nov 24 18:34:23 compute-0 interesting_curie[171054]:         {
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "devices": [
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "/dev/loop5"
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             ],
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "lv_name": "ceph_lv2",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "lv_size": "21470642176",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "name": "ceph_lv2",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "tags": {
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.cluster_name": "ceph",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.crush_device_class": "",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.encrypted": "0",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.osd_id": "2",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.type": "block",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:                 "ceph.vdo": "0"
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             },
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "type": "block",
Nov 24 18:34:23 compute-0 interesting_curie[171054]:             "vg_name": "ceph_vg2"
Nov 24 18:34:23 compute-0 interesting_curie[171054]:         }
Nov 24 18:34:23 compute-0 interesting_curie[171054]:     ]
Nov 24 18:34:23 compute-0 interesting_curie[171054]: }
Nov 24 18:34:23 compute-0 systemd[1]: libpod-ac06c040cb15be6c0c9b0944a2429b2e3eba7e0e45c6b6ad65b0cc4b04fd2df2.scope: Deactivated successfully.
Nov 24 18:34:23 compute-0 podman[171038]: 2025-11-24 18:34:23.919359126 +0000 UTC m=+0.898364644 container died ac06c040cb15be6c0c9b0944a2429b2e3eba7e0e45c6b6ad65b0cc4b04fd2df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_curie, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:34:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fc08636e7a676dc42ac5fc7dfcef6bdcba4ec8149da59bb5213cae54e231929-merged.mount: Deactivated successfully.
Nov 24 18:34:23 compute-0 podman[171038]: 2025-11-24 18:34:23.975631706 +0000 UTC m=+0.954637224 container remove ac06c040cb15be6c0c9b0944a2429b2e3eba7e0e45c6b6ad65b0cc4b04fd2df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_curie, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 18:34:23 compute-0 systemd[1]: libpod-conmon-ac06c040cb15be6c0c9b0944a2429b2e3eba7e0e45c6b6ad65b0cc4b04fd2df2.scope: Deactivated successfully.
Nov 24 18:34:24 compute-0 sudo[170933]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:24 compute-0 sudo[171204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:34:24 compute-0 sudo[171204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:34:24 compute-0 sudo[171204]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:24 compute-0 sudo[171256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:34:24 compute-0 sudo[171256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:34:24 compute-0 sudo[171256]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:24 compute-0 sudo[171281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:34:24 compute-0 sudo[171281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:34:24 compute-0 sudo[171281]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:24 compute-0 sudo[171306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:34:24 compute-0 sudo[171306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:34:24 compute-0 python3.9[171255]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:34:24 compute-0 podman[171373]: 2025-11-24 18:34:24.495633953 +0000 UTC m=+0.035007419 container create 7965a8de23f34c4987341404fad7219283ccf7bd083937e14488fe41a634b444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 24 18:34:24 compute-0 systemd[1]: Started libpod-conmon-7965a8de23f34c4987341404fad7219283ccf7bd083937e14488fe41a634b444.scope.
Nov 24 18:34:24 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:34:24 compute-0 podman[171373]: 2025-11-24 18:34:24.57168427 +0000 UTC m=+0.111057736 container init 7965a8de23f34c4987341404fad7219283ccf7bd083937e14488fe41a634b444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lumiere, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 24 18:34:24 compute-0 podman[171373]: 2025-11-24 18:34:24.479656182 +0000 UTC m=+0.019029668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:34:24 compute-0 podman[171373]: 2025-11-24 18:34:24.578522171 +0000 UTC m=+0.117895637 container start 7965a8de23f34c4987341404fad7219283ccf7bd083937e14488fe41a634b444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:34:24 compute-0 podman[171373]: 2025-11-24 18:34:24.581339722 +0000 UTC m=+0.120713208 container attach 7965a8de23f34c4987341404fad7219283ccf7bd083937e14488fe41a634b444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 18:34:24 compute-0 happy_lumiere[171389]: 167 167
Nov 24 18:34:24 compute-0 systemd[1]: libpod-7965a8de23f34c4987341404fad7219283ccf7bd083937e14488fe41a634b444.scope: Deactivated successfully.
Nov 24 18:34:24 compute-0 podman[171373]: 2025-11-24 18:34:24.583602799 +0000 UTC m=+0.122976265 container died 7965a8de23f34c4987341404fad7219283ccf7bd083937e14488fe41a634b444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 18:34:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-794876e19050cc03d667631f1afc227f2ae79d25977a8677fe9cec7a71de7e43-merged.mount: Deactivated successfully.
Nov 24 18:34:24 compute-0 podman[171373]: 2025-11-24 18:34:24.614713779 +0000 UTC m=+0.154087235 container remove 7965a8de23f34c4987341404fad7219283ccf7bd083937e14488fe41a634b444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lumiere, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:34:24 compute-0 systemd[1]: libpod-conmon-7965a8de23f34c4987341404fad7219283ccf7bd083937e14488fe41a634b444.scope: Deactivated successfully.
Nov 24 18:34:24 compute-0 podman[171436]: 2025-11-24 18:34:24.760886513 +0000 UTC m=+0.039303866 container create e4d3819c1d3abf03908cec919b83824b54ce77b305b6b195860862fcf2a7682c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 18:34:24 compute-0 systemd[1]: Started libpod-conmon-e4d3819c1d3abf03908cec919b83824b54ce77b305b6b195860862fcf2a7682c.scope.
Nov 24 18:34:24 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2da7d3f628998e22320d7492a6c16ce37ed113f9be128c7b64929dd5a6aff5f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2da7d3f628998e22320d7492a6c16ce37ed113f9be128c7b64929dd5a6aff5f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2da7d3f628998e22320d7492a6c16ce37ed113f9be128c7b64929dd5a6aff5f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:34:24 compute-0 podman[171436]: 2025-11-24 18:34:24.743944908 +0000 UTC m=+0.022362281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2da7d3f628998e22320d7492a6c16ce37ed113f9be128c7b64929dd5a6aff5f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:34:24 compute-0 podman[171436]: 2025-11-24 18:34:24.847263848 +0000 UTC m=+0.125681261 container init e4d3819c1d3abf03908cec919b83824b54ce77b305b6b195860862fcf2a7682c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_feynman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 18:34:24 compute-0 podman[171436]: 2025-11-24 18:34:24.854506259 +0000 UTC m=+0.132923612 container start e4d3819c1d3abf03908cec919b83824b54ce77b305b6b195860862fcf2a7682c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:34:24 compute-0 podman[171436]: 2025-11-24 18:34:24.857576106 +0000 UTC m=+0.135993459 container attach e4d3819c1d3abf03908cec919b83824b54ce77b305b6b195860862fcf2a7682c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:34:25 compute-0 systemd[1]: Stopping User Manager for UID 0...
Nov 24 18:34:25 compute-0 systemd[169931]: Activating special unit Exit the Session...
Nov 24 18:34:25 compute-0 systemd[169931]: Stopped target Main User Target.
Nov 24 18:34:25 compute-0 systemd[169931]: Stopped target Basic System.
Nov 24 18:34:25 compute-0 systemd[169931]: Stopped target Paths.
Nov 24 18:34:25 compute-0 systemd[169931]: Stopped target Sockets.
Nov 24 18:34:25 compute-0 systemd[169931]: Stopped target Timers.
Nov 24 18:34:25 compute-0 systemd[169931]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 24 18:34:25 compute-0 systemd[169931]: Closed D-Bus User Message Bus Socket.
Nov 24 18:34:25 compute-0 systemd[169931]: Stopped Create User's Volatile Files and Directories.
Nov 24 18:34:25 compute-0 systemd[169931]: Removed slice User Application Slice.
Nov 24 18:34:25 compute-0 systemd[169931]: Reached target Shutdown.
Nov 24 18:34:25 compute-0 systemd[169931]: Finished Exit the Session.
Nov 24 18:34:25 compute-0 systemd[169931]: Reached target Exit the Session.
Nov 24 18:34:25 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Nov 24 18:34:25 compute-0 systemd[1]: Stopped User Manager for UID 0.
Nov 24 18:34:25 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v556: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:25 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 24 18:34:25 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 24 18:34:25 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 24 18:34:25 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 24 18:34:25 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Nov 24 18:34:25 compute-0 sudo[171586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tckeeoubrgydluiqudubiqrwaznbbgyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009264.830705-34-69833731847658/AnsiballZ_file.py'
Nov 24 18:34:25 compute-0 sudo[171586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:25 compute-0 ceph-mon[74927]: pgmap v556: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:25 compute-0 python3.9[171588]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:34:25 compute-0 sudo[171586]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:25 compute-0 admiring_feynman[171460]: {
Nov 24 18:34:25 compute-0 admiring_feynman[171460]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:34:25 compute-0 admiring_feynman[171460]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:34:25 compute-0 admiring_feynman[171460]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:34:25 compute-0 admiring_feynman[171460]:         "osd_id": 0,
Nov 24 18:34:25 compute-0 admiring_feynman[171460]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:34:25 compute-0 admiring_feynman[171460]:         "type": "bluestore"
Nov 24 18:34:25 compute-0 admiring_feynman[171460]:     },
Nov 24 18:34:25 compute-0 admiring_feynman[171460]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:34:25 compute-0 admiring_feynman[171460]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:34:25 compute-0 admiring_feynman[171460]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:34:25 compute-0 admiring_feynman[171460]:         "osd_id": 1,
Nov 24 18:34:25 compute-0 admiring_feynman[171460]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:34:25 compute-0 admiring_feynman[171460]:         "type": "bluestore"
Nov 24 18:34:25 compute-0 admiring_feynman[171460]:     },
Nov 24 18:34:25 compute-0 admiring_feynman[171460]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:34:25 compute-0 admiring_feynman[171460]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:34:25 compute-0 admiring_feynman[171460]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:34:25 compute-0 admiring_feynman[171460]:         "osd_id": 2,
Nov 24 18:34:25 compute-0 admiring_feynman[171460]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:34:25 compute-0 admiring_feynman[171460]:         "type": "bluestore"
Nov 24 18:34:25 compute-0 admiring_feynman[171460]:     }
Nov 24 18:34:25 compute-0 admiring_feynman[171460]: }
Nov 24 18:34:25 compute-0 systemd[1]: libpod-e4d3819c1d3abf03908cec919b83824b54ce77b305b6b195860862fcf2a7682c.scope: Deactivated successfully.
Nov 24 18:34:25 compute-0 systemd[1]: libpod-e4d3819c1d3abf03908cec919b83824b54ce77b305b6b195860862fcf2a7682c.scope: Consumed 1.033s CPU time.
Nov 24 18:34:25 compute-0 podman[171436]: 2025-11-24 18:34:25.901430296 +0000 UTC m=+1.179847679 container died e4d3819c1d3abf03908cec919b83824b54ce77b305b6b195860862fcf2a7682c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 18:34:26 compute-0 sudo[171795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltndesieaiutvqlqypsdrarrridyffeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009265.7237864-34-211954807554707/AnsiballZ_file.py'
Nov 24 18:34:26 compute-0 sudo[171795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-2da7d3f628998e22320d7492a6c16ce37ed113f9be128c7b64929dd5a6aff5f4-merged.mount: Deactivated successfully.
Nov 24 18:34:26 compute-0 podman[171436]: 2025-11-24 18:34:26.849175136 +0000 UTC m=+2.127592489 container remove e4d3819c1d3abf03908cec919b83824b54ce77b305b6b195860862fcf2a7682c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_feynman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:34:26 compute-0 systemd[1]: libpod-conmon-e4d3819c1d3abf03908cec919b83824b54ce77b305b6b195860862fcf2a7682c.scope: Deactivated successfully.
Nov 24 18:34:26 compute-0 sudo[171306]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:26 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:34:26 compute-0 python3.9[171797]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:34:26 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:34:26 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:34:26 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:34:26 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev b69304bb-166b-4453-9384-8c965b360dcb does not exist
Nov 24 18:34:26 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 6d70bf40-0e5c-429f-811e-7faaecd6b75b does not exist
Nov 24 18:34:26 compute-0 sudo[171795]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:26 compute-0 sudo[171799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:34:26 compute-0 sudo[171799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:34:26 compute-0 sudo[171799]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:27 compute-0 sudo[171847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:34:27 compute-0 sudo[171847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:34:27 compute-0 sudo[171847]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:27 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v557: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:27 compute-0 sudo[171998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwbbwesbdnjjxiujsxflyypxcbdjmnlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009267.088817-34-204801023982460/AnsiballZ_file.py'
Nov 24 18:34:27 compute-0 sudo[171998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:27 compute-0 python3.9[172000]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:34:27 compute-0 sudo[171998]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:34:27 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:34:27 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:34:27 compute-0 ceph-mon[74927]: pgmap v557: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:27 compute-0 sudo[172150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jairbhvpsbfzdeaeqhopetbpxewctlhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009267.7039526-34-157202731456011/AnsiballZ_file.py'
Nov 24 18:34:27 compute-0 sudo[172150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:28 compute-0 python3.9[172152]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:34:28 compute-0 sudo[172150]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:28 compute-0 sudo[172302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xraqyihspqkhgsrefxhrejwtkkoafsvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009268.3009381-34-66762211762175/AnsiballZ_file.py'
Nov 24 18:34:28 compute-0 sudo[172302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:28 compute-0 python3.9[172304]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:34:28 compute-0 sudo[172302]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:29 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v558: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:29 compute-0 ceph-mon[74927]: pgmap v558: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:29 compute-0 python3.9[172454]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:34:30 compute-0 sudo[172604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghbmiudvcvvevskdpzrzoivfgataxbgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009269.7437472-78-162716200295385/AnsiballZ_seboolean.py'
Nov 24 18:34:30 compute-0 sudo[172604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:30 compute-0 python3.9[172606]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 24 18:34:31 compute-0 sudo[172604]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:31 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v559: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:31 compute-0 ceph-mon[74927]: pgmap v559: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:31 compute-0 python3.9[172756]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:34:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:34:32 compute-0 python3.9[172877]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009271.267652-86-268222752231914/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:34:33 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v560: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:33 compute-0 ceph-mon[74927]: pgmap v560: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:33 compute-0 python3.9[173027]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:34:34 compute-0 python3.9[173149]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009272.8752599-101-243795067466894/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:34:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:34:34
Nov 24 18:34:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:34:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:34:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', '.rgw.root', 'images', 'vms', 'default.rgw.control', 'volumes', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta']
Nov 24 18:34:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:34:34 compute-0 sudo[173299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhqomcmylilpgnpxzouzlmfqceiseahl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009274.3327036-118-40843719831940/AnsiballZ_setup.py'
Nov 24 18:34:34 compute-0 sudo[173299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:34:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:34:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:34:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:34:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:34:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:34:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:34:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:34:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:34:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:34:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:34:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:34:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:34:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:34:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:34:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:34:34 compute-0 python3.9[173301]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 18:34:35 compute-0 sudo[173299]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:35 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v561: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:35 compute-0 ceph-mon[74927]: pgmap v561: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:35 compute-0 ceph-mgr[75218]: client.0 ms_handle_reset on v2:192.168.122.100:6800/536471675
Nov 24 18:34:35 compute-0 sudo[173383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bncjrrrrxgiyshkmpqutlqwfdcrpkgbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009274.3327036-118-40843719831940/AnsiballZ_dnf.py'
Nov 24 18:34:35 compute-0 sudo[173383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:35 compute-0 python3.9[173385]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:34:37 compute-0 sudo[173383]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:37 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v562: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:37 compute-0 ceph-mon[74927]: pgmap v562: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:34:38 compute-0 sudo[173536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epfcqexrpifrrvxxvcqsrleeqshtdstc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009277.4043674-130-120621055590691/AnsiballZ_systemd.py'
Nov 24 18:34:38 compute-0 sudo[173536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:38 compute-0 python3.9[173538]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 18:34:38 compute-0 sudo[173536]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:39 compute-0 python3.9[173691]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:34:39 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v563: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:39 compute-0 ceph-mon[74927]: pgmap v563: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:39 compute-0 python3.9[173812]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009278.6302657-138-211746784057465/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:34:40 compute-0 python3.9[173962]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:34:41 compute-0 python3.9[174083]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009279.9707673-138-92510014303264/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:34:41 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v564: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:41 compute-0 ceph-mon[74927]: pgmap v564: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:42 compute-0 python3.9[174233]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:34:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:34:43 compute-0 python3.9[174354]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009282.0292325-182-96257055751031/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:34:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:34:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:34:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:34:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:34:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:34:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:34:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:34:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:34:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:34:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:34:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:34:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:34:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:34:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:34:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:34:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:34:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:34:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:34:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:34:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:34:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:34:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:34:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:34:43 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v565: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:43 compute-0 ceph-mon[74927]: pgmap v565: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:43 compute-0 python3.9[174504]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:34:44 compute-0 python3.9[174625]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009283.1474743-182-227912343828892/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:34:44 compute-0 python3.9[174775]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:34:44 compute-0 ovn_controller[169892]: 2025-11-24T18:34:44Z|00025|memory|INFO|16256 kB peak resident set size after 29.8 seconds
Nov 24 18:34:44 compute-0 ovn_controller[169892]: 2025-11-24T18:34:44Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Nov 24 18:34:44 compute-0 podman[174776]: 2025-11-24 18:34:44.993821472 +0000 UTC m=+0.088347661 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 18:34:45 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v566: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:45 compute-0 ceph-mon[74927]: pgmap v566: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:45 compute-0 sudo[174951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyfihtomuqlqmvawyculvvfnjveuewit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009285.2035353-220-100327640417417/AnsiballZ_file.py'
Nov 24 18:34:45 compute-0 sudo[174951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:45 compute-0 python3.9[174953]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:34:45 compute-0 sudo[174951]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:46 compute-0 sudo[175103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-splghcgidfwuqordjvpiifzlxmcvudch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009285.9063523-228-233096788318958/AnsiballZ_stat.py'
Nov 24 18:34:46 compute-0 sudo[175103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:46 compute-0 python3.9[175105]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:34:46 compute-0 sudo[175103]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:46 compute-0 sudo[175181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqrabkjkconstegzevtywojvnnqqdvkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009285.9063523-228-233096788318958/AnsiballZ_file.py'
Nov 24 18:34:46 compute-0 sudo[175181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:46 compute-0 python3.9[175183]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:34:46 compute-0 sudo[175181]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:47 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v567: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:47 compute-0 sudo[175333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqdjxhsyoqopsvukasqtdxojhhlxalix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009287.0108616-228-166999123789866/AnsiballZ_stat.py'
Nov 24 18:34:47 compute-0 sudo[175333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:47 compute-0 ceph-mon[74927]: pgmap v567: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:47 compute-0 python3.9[175335]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:34:47 compute-0 sudo[175333]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:34:47 compute-0 sudo[175411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gskldtlomqwbwzqitcihxagakkgbyubv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009287.0108616-228-166999123789866/AnsiballZ_file.py'
Nov 24 18:34:47 compute-0 sudo[175411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:47 compute-0 python3.9[175413]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:34:47 compute-0 sudo[175411]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:48 compute-0 sudo[175563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqzptysmpbqacbkqzpaprwvhiesmvjao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009288.1224704-251-134747331201769/AnsiballZ_file.py'
Nov 24 18:34:48 compute-0 sudo[175563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:48 compute-0 python3.9[175565]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:34:48 compute-0 sudo[175563]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:49 compute-0 sudo[175715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssmnuifqcqumijrvyccbougizlbkqysh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009288.7435932-259-12013864639406/AnsiballZ_stat.py'
Nov 24 18:34:49 compute-0 sudo[175715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:49 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v568: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:49 compute-0 python3.9[175717]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:34:49 compute-0 sudo[175715]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:49 compute-0 ceph-mon[74927]: pgmap v568: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:49 compute-0 sudo[175793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfdhfkxlmgfxwxcaxmxtlynxwbkrylnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009288.7435932-259-12013864639406/AnsiballZ_file.py'
Nov 24 18:34:49 compute-0 sudo[175793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:49 compute-0 python3.9[175795]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:34:49 compute-0 sudo[175793]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:50 compute-0 sudo[175945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqyklyhsqxgjpavwkukzplyunxzeastb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009289.9481719-271-127217881292524/AnsiballZ_stat.py'
Nov 24 18:34:50 compute-0 sudo[175945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:50 compute-0 python3.9[175947]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:34:50 compute-0 sudo[175945]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:50 compute-0 sudo[176023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilopgtkpfppquxqwqbfjbfcavoiyvtrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009289.9481719-271-127217881292524/AnsiballZ_file.py'
Nov 24 18:34:50 compute-0 sudo[176023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:50 compute-0 python3.9[176025]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:34:50 compute-0 sudo[176023]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:51 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v569: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:51 compute-0 ceph-mon[74927]: pgmap v569: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:51 compute-0 sudo[176175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbrtfqobqgvfvedspnrbzqdkjyhkmprj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009291.0639617-283-45316313140425/AnsiballZ_systemd.py'
Nov 24 18:34:51 compute-0 sudo[176175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:51 compute-0 python3.9[176177]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:34:51 compute-0 systemd[1]: Reloading.
Nov 24 18:34:51 compute-0 systemd-rc-local-generator[176206]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:34:51 compute-0 systemd-sysv-generator[176210]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:34:52 compute-0 sudo[176175]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:34:52 compute-0 sudo[176365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkjkxaglkptfumhmuhbolxvjjjcpgcdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009292.3796387-291-44935835954251/AnsiballZ_stat.py'
Nov 24 18:34:52 compute-0 sudo[176365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:52 compute-0 python3.9[176367]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:34:52 compute-0 sudo[176365]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:53 compute-0 sudo[176443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqzjueaeooojpujvimszyzsevdzdplhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009292.3796387-291-44935835954251/AnsiballZ_file.py'
Nov 24 18:34:53 compute-0 sudo[176443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:53 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v570: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:53 compute-0 ceph-mon[74927]: pgmap v570: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:53 compute-0 python3.9[176445]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:34:53 compute-0 sudo[176443]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:53 compute-0 sudo[176595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldvtikrhvfnsohwuquzskdbhofwywvyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009293.6138651-303-234774510196479/AnsiballZ_stat.py'
Nov 24 18:34:53 compute-0 sudo[176595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:54 compute-0 python3.9[176597]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:34:54 compute-0 sudo[176595]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:54 compute-0 sudo[176673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmjrwgjtxcszzccykfulpenoqjpwcpsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009293.6138651-303-234774510196479/AnsiballZ_file.py'
Nov 24 18:34:54 compute-0 sudo[176673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:54 compute-0 python3.9[176675]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:34:54 compute-0 sudo[176673]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:54 compute-0 auditd[701]: Audit daemon rotating log files
Nov 24 18:34:55 compute-0 sudo[176825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdhmvfbkcuvufahpaysihgotzntthiut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009294.8090656-315-91675050780359/AnsiballZ_systemd.py'
Nov 24 18:34:55 compute-0 sudo[176825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:55 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v571: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:55 compute-0 ceph-mon[74927]: pgmap v571: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:55 compute-0 python3.9[176827]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:34:55 compute-0 systemd[1]: Reloading.
Nov 24 18:34:55 compute-0 systemd-rc-local-generator[176856]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:34:55 compute-0 systemd-sysv-generator[176859]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:34:55 compute-0 systemd[1]: Starting Create netns directory...
Nov 24 18:34:55 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 18:34:55 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 18:34:55 compute-0 systemd[1]: Finished Create netns directory.
Nov 24 18:34:55 compute-0 sudo[176825]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:56 compute-0 sudo[177019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddymxzjdknkseghffkmqrfjgcmdrguue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009296.1333294-325-7312364488027/AnsiballZ_file.py'
Nov 24 18:34:56 compute-0 sudo[177019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:56 compute-0 python3.9[177021]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:34:56 compute-0 sudo[177019]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:57 compute-0 sudo[177172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icgiaiolzqvlihuyspigywgtuflygngy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009296.8196585-333-179039905916061/AnsiballZ_stat.py'
Nov 24 18:34:57 compute-0 sudo[177172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:57 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v572: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:57 compute-0 python3.9[177174]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:34:57 compute-0 sudo[177172]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:57 compute-0 ceph-mon[74927]: pgmap v572: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:34:57 compute-0 sudo[177296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyfdbhsepimqwiltznpvssahiicqvqsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009296.8196585-333-179039905916061/AnsiballZ_copy.py'
Nov 24 18:34:57 compute-0 sudo[177296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:58 compute-0 python3.9[177298]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009296.8196585-333-179039905916061/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:34:58 compute-0 sudo[177296]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:58 compute-0 sshd-session[177067]: Connection closed by authenticating user root 80.94.95.116 port 36070 [preauth]
Nov 24 18:34:58 compute-0 sudo[177448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmmboferjyiyjofnvmmauokcdvtneqxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009298.3913884-350-85271412088757/AnsiballZ_file.py'
Nov 24 18:34:58 compute-0 sudo[177448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:59 compute-0 python3.9[177450]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:34:59 compute-0 sudo[177448]: pam_unix(sudo:session): session closed for user root
Nov 24 18:34:59 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v573: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:59 compute-0 ceph-mon[74927]: pgmap v573: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:34:59 compute-0 sudo[177600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cksrsgdvimqhnxghzwubdyoogehlfsks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009299.334048-358-155557701270394/AnsiballZ_stat.py'
Nov 24 18:34:59 compute-0 sudo[177600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:34:59 compute-0 python3.9[177602]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:34:59 compute-0 sudo[177600]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:00 compute-0 sudo[177723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcidoowijvbbrjsbfdpvviceauzxeead ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009299.334048-358-155557701270394/AnsiballZ_copy.py'
Nov 24 18:35:00 compute-0 sudo[177723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:00 compute-0 python3.9[177725]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764009299.334048-358-155557701270394/.source.json _original_basename=.dz3adk1y follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:35:00 compute-0 sudo[177723]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:01 compute-0 sudo[177875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhqrflgsmjhlsnpknrjwhmyzyjakfgln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009300.6905925-373-199598387851071/AnsiballZ_file.py'
Nov 24 18:35:01 compute-0 sudo[177875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:01 compute-0 python3.9[177877]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:35:01 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v574: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:01 compute-0 sudo[177875]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:01 compute-0 ceph-mon[74927]: pgmap v574: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:01 compute-0 sudo[178027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgpztxcodypskzhuttuvtthsinenzlvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009301.5064633-381-42323442685097/AnsiballZ_stat.py'
Nov 24 18:35:01 compute-0 sudo[178027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:01 compute-0 sudo[178027]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:02 compute-0 sudo[178150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfvvndggrvkkcpbfwdprboijqjgebbtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009301.5064633-381-42323442685097/AnsiballZ_copy.py'
Nov 24 18:35:02 compute-0 sudo[178150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:02 compute-0 sudo[178150]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:35:03 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v575: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:03 compute-0 ceph-mon[74927]: pgmap v575: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:03 compute-0 sudo[178302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mixylcgfflsgbkjnccbepuxypjfushkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009302.9257374-398-58642859511738/AnsiballZ_container_config_data.py'
Nov 24 18:35:03 compute-0 sudo[178302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:03 compute-0 python3.9[178304]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 24 18:35:03 compute-0 sudo[178302]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:04 compute-0 sudo[178454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ietunvppeusnrdemuvkzbhyjvzmlexdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009303.8998652-407-106067913194119/AnsiballZ_container_config_hash.py'
Nov 24 18:35:04 compute-0 sudo[178454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:04 compute-0 python3.9[178456]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 18:35:04 compute-0 sudo[178454]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:35:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:35:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:35:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:35:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:35:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:35:05 compute-0 sudo[178606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzrsutisuwhsbmgwytfwqwqhznilynzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009304.7291768-416-166749266814218/AnsiballZ_podman_container_info.py'
Nov 24 18:35:05 compute-0 sudo[178606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:05 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v576: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:05 compute-0 ceph-mon[74927]: pgmap v576: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:05 compute-0 python3.9[178608]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 24 18:35:05 compute-0 sudo[178606]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:06 compute-0 sudo[178784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsnetrvhrhgxqweocweihdqnfjwqydnk ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764009306.1997428-429-218006776331033/AnsiballZ_edpm_container_manage.py'
Nov 24 18:35:06 compute-0 sudo[178784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:07 compute-0 python3[178786]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 18:35:07 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v577: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:07 compute-0 ceph-mon[74927]: pgmap v577: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:35:09 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v578: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:09 compute-0 ceph-mon[74927]: pgmap v578: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:11 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v579: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:11 compute-0 ceph-mon[74927]: pgmap v579: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:35:13 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v580: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:13 compute-0 ceph-mon[74927]: pgmap v580: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:14 compute-0 podman[178798]: 2025-11-24 18:35:14.855966456 +0000 UTC m=+7.745127479 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 18:35:15 compute-0 podman[178918]: 2025-11-24 18:35:15.06893954 +0000 UTC m=+0.059304074 container create 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 24 18:35:15 compute-0 podman[178918]: 2025-11-24 18:35:15.037057144 +0000 UTC m=+0.027421728 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 18:35:15 compute-0 python3[178786]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 18:35:15 compute-0 sudo[178784]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:15 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v581: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:15 compute-0 ceph-mon[74927]: pgmap v581: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:15 compute-0 sudo[179116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kihlzzbzdmfchbbamjlxsxtwzrojkcrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009315.4679837-437-83960195683692/AnsiballZ_stat.py'
Nov 24 18:35:15 compute-0 sudo[179116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:15 compute-0 podman[179080]: 2025-11-24 18:35:15.956714629 +0000 UTC m=+0.172288463 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 24 18:35:16 compute-0 python3.9[179121]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:35:16 compute-0 sudo[179116]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:16 compute-0 sudo[179287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdwvhdeisawzanctahxixgtraapqzwgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009316.3470192-446-101605085257032/AnsiballZ_file.py'
Nov 24 18:35:16 compute-0 sudo[179287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:16 compute-0 python3.9[179289]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:35:16 compute-0 sudo[179287]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:17 compute-0 sudo[179363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtkfvvfrvdkqtpnyvpaqecjufoizfpms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009316.3470192-446-101605085257032/AnsiballZ_stat.py'
Nov 24 18:35:17 compute-0 sudo[179363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:17 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v582: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:17 compute-0 ceph-mon[74927]: pgmap v582: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:17 compute-0 python3.9[179365]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:35:17 compute-0 sudo[179363]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:35:17 compute-0 sudo[179514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjmbbuchbtdkzrhisedvomoqjpnmpxfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009317.4370267-446-3678531132004/AnsiballZ_copy.py'
Nov 24 18:35:17 compute-0 sudo[179514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:18 compute-0 python3.9[179516]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764009317.4370267-446-3678531132004/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:35:18 compute-0 sudo[179514]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:18 compute-0 sudo[179590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmzopfywbbfgutsvyrzelnsgosmwvbrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009317.4370267-446-3678531132004/AnsiballZ_systemd.py'
Nov 24 18:35:18 compute-0 sudo[179590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:18 compute-0 python3.9[179592]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 18:35:18 compute-0 systemd[1]: Reloading.
Nov 24 18:35:18 compute-0 systemd-sysv-generator[179625]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:35:18 compute-0 systemd-rc-local-generator[179620]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:35:19 compute-0 sudo[179590]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:19 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v583: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:19 compute-0 sudo[179702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykjusfbrebvtmkmoytdpicheqyffayuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009317.4370267-446-3678531132004/AnsiballZ_systemd.py'
Nov 24 18:35:19 compute-0 sudo[179702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:19 compute-0 ceph-mon[74927]: pgmap v583: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:19 compute-0 python3.9[179704]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:35:19 compute-0 systemd[1]: Reloading.
Nov 24 18:35:19 compute-0 systemd-rc-local-generator[179727]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:35:19 compute-0 systemd-sysv-generator[179730]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:35:19 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Nov 24 18:35:20 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:35:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fea43c7fdcc53abb769fc1d07f729dd8d9aeebb3386c60b0f057da8ac7da108/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 24 18:35:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fea43c7fdcc53abb769fc1d07f729dd8d9aeebb3386c60b0f057da8ac7da108/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 18:35:20 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf.
Nov 24 18:35:20 compute-0 podman[179744]: 2025-11-24 18:35:20.354382001 +0000 UTC m=+0.401691994 container init 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 18:35:20 compute-0 ovn_metadata_agent[179758]: + sudo -E kolla_set_configs
Nov 24 18:35:20 compute-0 podman[179744]: 2025-11-24 18:35:20.38638467 +0000 UTC m=+0.433694593 container start 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 24 18:35:20 compute-0 edpm-start-podman-container[179744]: ovn_metadata_agent
Nov 24 18:35:20 compute-0 edpm-start-podman-container[179743]: Creating additional drop-in dependency for "ovn_metadata_agent" (016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf)
Nov 24 18:35:20 compute-0 podman[179765]: 2025-11-24 18:35:20.63801703 +0000 UTC m=+0.243797247 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 18:35:20 compute-0 systemd[1]: Reloading.
Nov 24 18:35:20 compute-0 ovn_metadata_agent[179758]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 18:35:20 compute-0 ovn_metadata_agent[179758]: INFO:__main__:Validating config file
Nov 24 18:35:20 compute-0 ovn_metadata_agent[179758]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 18:35:20 compute-0 ovn_metadata_agent[179758]: INFO:__main__:Copying service configuration files
Nov 24 18:35:20 compute-0 ovn_metadata_agent[179758]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 24 18:35:20 compute-0 ovn_metadata_agent[179758]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 24 18:35:20 compute-0 ovn_metadata_agent[179758]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 24 18:35:20 compute-0 ovn_metadata_agent[179758]: INFO:__main__:Writing out command to execute
Nov 24 18:35:20 compute-0 ovn_metadata_agent[179758]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 24 18:35:20 compute-0 ovn_metadata_agent[179758]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 24 18:35:20 compute-0 ovn_metadata_agent[179758]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 24 18:35:20 compute-0 ovn_metadata_agent[179758]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 24 18:35:20 compute-0 ovn_metadata_agent[179758]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 24 18:35:20 compute-0 ovn_metadata_agent[179758]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 24 18:35:20 compute-0 ovn_metadata_agent[179758]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 24 18:35:20 compute-0 systemd-rc-local-generator[179825]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:35:20 compute-0 systemd-sysv-generator[179831]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:35:20 compute-0 ovn_metadata_agent[179758]: ++ cat /run_command
Nov 24 18:35:20 compute-0 ovn_metadata_agent[179758]: + CMD=neutron-ovn-metadata-agent
Nov 24 18:35:20 compute-0 ovn_metadata_agent[179758]: + ARGS=
Nov 24 18:35:20 compute-0 ovn_metadata_agent[179758]: + sudo kolla_copy_cacerts
Nov 24 18:35:20 compute-0 ovn_metadata_agent[179758]: + [[ ! -n '' ]]
Nov 24 18:35:20 compute-0 ovn_metadata_agent[179758]: + . kolla_extend_start
Nov 24 18:35:20 compute-0 ovn_metadata_agent[179758]: Running command: 'neutron-ovn-metadata-agent'
Nov 24 18:35:20 compute-0 ovn_metadata_agent[179758]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 24 18:35:20 compute-0 ovn_metadata_agent[179758]: + umask 0022
Nov 24 18:35:20 compute-0 ovn_metadata_agent[179758]: + exec neutron-ovn-metadata-agent
Nov 24 18:35:20 compute-0 systemd[1]: Started ovn_metadata_agent container.
Nov 24 18:35:21 compute-0 sudo[179702]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:21 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v584: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:21 compute-0 ceph-mon[74927]: pgmap v584: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:21 compute-0 sshd-session[171062]: Connection closed by 192.168.122.30 port 43756
Nov 24 18:35:21 compute-0 sshd-session[171059]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:35:21 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Nov 24 18:35:21 compute-0 systemd[1]: session-50.scope: Consumed 54.354s CPU time.
Nov 24 18:35:21 compute-0 systemd-logind[822]: Session 50 logged out. Waiting for processes to exit.
Nov 24 18:35:21 compute-0 systemd-logind[822]: Removed session 50.
Nov 24 18:35:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.675 179763 INFO neutron.common.config [-] Logging enabled!
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.676 179763 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.677 179763 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.677 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.677 179763 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.677 179763 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.678 179763 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.678 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.678 179763 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.678 179763 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.678 179763 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.678 179763 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.679 179763 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.679 179763 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.679 179763 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.679 179763 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.679 179763 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.679 179763 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.680 179763 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.680 179763 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.680 179763 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.680 179763 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.680 179763 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.680 179763 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.680 179763 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.681 179763 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.681 179763 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.681 179763 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.681 179763 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.681 179763 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.681 179763 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.682 179763 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.682 179763 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.682 179763 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.682 179763 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.682 179763 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.682 179763 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.682 179763 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.683 179763 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.683 179763 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.683 179763 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.684 179763 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.684 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.684 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.684 179763 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.684 179763 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.684 179763 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.685 179763 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.685 179763 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.685 179763 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.685 179763 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.685 179763 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.685 179763 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.686 179763 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.686 179763 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.686 179763 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.686 179763 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.686 179763 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.686 179763 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.686 179763 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.687 179763 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.687 179763 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.687 179763 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.687 179763 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.687 179763 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.688 179763 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.688 179763 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.688 179763 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.688 179763 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.688 179763 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.689 179763 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.689 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.689 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.689 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.689 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.689 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.690 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.690 179763 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.690 179763 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.690 179763 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.690 179763 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.690 179763 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.691 179763 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.691 179763 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.691 179763 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.691 179763 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.691 179763 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.692 179763 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.692 179763 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.692 179763 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.692 179763 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.692 179763 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.692 179763 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.692 179763 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.693 179763 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.693 179763 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.693 179763 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.693 179763 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.693 179763 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.693 179763 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.693 179763 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.693 179763 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.694 179763 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.694 179763 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.694 179763 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.694 179763 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.694 179763 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.694 179763 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.695 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.695 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.695 179763 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.695 179763 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.695 179763 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.695 179763 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.696 179763 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.696 179763 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.696 179763 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.696 179763 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.696 179763 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.697 179763 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.697 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.697 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.697 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.697 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.697 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.697 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.698 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.698 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.698 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.698 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.698 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.698 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.698 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.699 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.699 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.699 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.699 179763 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.699 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.699 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.700 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.700 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.700 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.700 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.700 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.700 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.700 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.701 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.701 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.701 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.701 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.701 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.701 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.701 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.702 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.702 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.702 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.702 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.702 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.702 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.702 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.703 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.703 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.703 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.703 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.703 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.703 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.704 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.704 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.704 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.704 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.704 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.704 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.704 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.705 179763 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.705 179763 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.705 179763 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.705 179763 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.705 179763 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.705 179763 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.706 179763 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.706 179763 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.706 179763 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.706 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.706 179763 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.706 179763 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.707 179763 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.707 179763 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.707 179763 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.707 179763 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.707 179763 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.707 179763 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.707 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.708 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.708 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.708 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.708 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.708 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.708 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.708 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.709 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.709 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.709 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.709 179763 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.709 179763 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.709 179763 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.710 179763 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.710 179763 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.710 179763 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.710 179763 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.710 179763 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.710 179763 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.711 179763 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.711 179763 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.711 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.711 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.711 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.711 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.712 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.712 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.712 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.712 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.712 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.712 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.713 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.713 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.713 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.713 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.713 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.713 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.713 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.714 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.714 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.714 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.714 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.714 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.714 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.714 179763 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.715 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.715 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.715 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.715 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.715 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.716 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.716 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.716 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.716 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.716 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.716 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.717 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.717 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.717 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.717 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.717 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.718 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.718 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.718 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.718 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.718 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.718 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.719 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.719 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.719 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.719 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.719 179763 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.719 179763 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.719 179763 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.720 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.720 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.720 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.720 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.720 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.720 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.720 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.721 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.721 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.721 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.721 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.721 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.721 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.721 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.722 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.722 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.722 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.722 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.722 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.722 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.722 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.723 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.723 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.723 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.723 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.723 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.724 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.724 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.724 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.724 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.724 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.724 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.725 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.725 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.725 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.725 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.725 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.725 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.735 179763 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.735 179763 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.735 179763 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.736 179763 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.736 179763 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.750 179763 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 302e9f34-0427-4ff9-a29b-2fc7b5250666 (UUID: 302e9f34-0427-4ff9-a29b-2fc7b5250666) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.778 179763 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.778 179763 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.779 179763 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.779 179763 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.782 179763 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.789 179763 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.795 179763 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '302e9f34-0427-4ff9-a29b-2fc7b5250666'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7ff10f88f880>], external_ids={}, name=302e9f34-0427-4ff9-a29b-2fc7b5250666, nb_cfg_timestamp=1764009263291, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.796 179763 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7ff10f88fb20>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.797 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.797 179763 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.798 179763 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.798 179763 INFO oslo_service.service [-] Starting 1 workers
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.802 179763 DEBUG oslo_service.service [-] Started child 179867 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.806 179763 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpzjm8w9ae/privsep.sock']
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.806 179867 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-232090'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.834 179867 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.835 179867 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.835 179867 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.839 179867 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.846 179867 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 24 18:35:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.853 179867 INFO eventlet.wsgi.server [-] (179867) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Nov 24 18:35:23 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v585: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:23 compute-0 ceph-mon[74927]: pgmap v585: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:23 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 24 18:35:23 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:23.484 179763 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 24 18:35:23 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:23.485 179763 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpzjm8w9ae/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 24 18:35:23 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:23.362 179872 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 24 18:35:23 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:23.367 179872 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 24 18:35:23 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:23.368 179872 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 24 18:35:23 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:23.369 179872 INFO oslo.privsep.daemon [-] privsep daemon running as pid 179872
Nov 24 18:35:23 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:23.488 179872 DEBUG oslo.privsep.daemon [-] privsep: reply[ec2670db-2ec1-479b-a468-f74e6ab5802f]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 18:35:23 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:23.932 179872 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:35:23 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:23.933 179872 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:35:23 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:23.933 179872 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.423 179872 DEBUG oslo.privsep.daemon [-] privsep: reply[d8bd70c4-f519-4e09-8b45-70e17bf08459]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.425 179763 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=302e9f34-0427-4ff9-a29b-2fc7b5250666, column=external_ids, values=({'neutron:ovn-metadata-id': 'b0697a09-6663-5123-a0f9-534f577dc986'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.438 179763 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=302e9f34-0427-4ff9-a29b-2fc7b5250666, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.446 179763 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.447 179763 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.447 179763 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.447 179763 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.447 179763 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.447 179763 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.447 179763 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.447 179763 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.448 179763 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.448 179763 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.448 179763 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.448 179763 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.448 179763 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.448 179763 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.449 179763 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.449 179763 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.449 179763 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.449 179763 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.449 179763 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.449 179763 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.449 179763 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.450 179763 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.450 179763 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.450 179763 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.450 179763 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.450 179763 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.450 179763 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.451 179763 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.451 179763 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.451 179763 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.451 179763 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.451 179763 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.451 179763 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.451 179763 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.452 179763 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.452 179763 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.452 179763 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.452 179763 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.452 179763 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.452 179763 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.453 179763 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.453 179763 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.453 179763 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.453 179763 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.453 179763 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.453 179763 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.453 179763 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.454 179763 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.454 179763 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.454 179763 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.454 179763 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.454 179763 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.454 179763 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.455 179763 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.455 179763 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.455 179763 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.455 179763 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.455 179763 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.455 179763 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.455 179763 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.456 179763 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.456 179763 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.456 179763 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.456 179763 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.456 179763 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.456 179763 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.457 179763 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.457 179763 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.457 179763 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.457 179763 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.457 179763 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.457 179763 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.457 179763 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.458 179763 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.458 179763 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.458 179763 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.458 179763 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.458 179763 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.458 179763 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.459 179763 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.459 179763 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.459 179763 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.459 179763 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.459 179763 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.459 179763 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.459 179763 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.460 179763 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.460 179763 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.460 179763 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.460 179763 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.460 179763 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.460 179763 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.460 179763 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.461 179763 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.461 179763 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.461 179763 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.461 179763 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.461 179763 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.461 179763 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.461 179763 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.461 179763 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.462 179763 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.462 179763 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.462 179763 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.462 179763 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.462 179763 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.462 179763 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.463 179763 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.463 179763 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.463 179763 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.463 179763 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.463 179763 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.463 179763 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.464 179763 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.464 179763 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.464 179763 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.464 179763 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.464 179763 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.464 179763 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.464 179763 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.465 179763 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.465 179763 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.465 179763 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.465 179763 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.465 179763 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.465 179763 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.466 179763 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.466 179763 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.466 179763 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.466 179763 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.466 179763 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.466 179763 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.467 179763 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.467 179763 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.467 179763 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.467 179763 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.467 179763 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.467 179763 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.467 179763 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.468 179763 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.468 179763 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.468 179763 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.468 179763 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.468 179763 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.468 179763 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.469 179763 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.469 179763 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.469 179763 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.469 179763 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.469 179763 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.469 179763 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.469 179763 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.470 179763 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.470 179763 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.470 179763 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.470 179763 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.470 179763 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.470 179763 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.471 179763 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.471 179763 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.471 179763 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.471 179763 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.471 179763 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.472 179763 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.472 179763 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.472 179763 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.472 179763 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.472 179763 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.472 179763 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.472 179763 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.473 179763 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.473 179763 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.473 179763 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.473 179763 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.473 179763 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.473 179763 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.473 179763 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.474 179763 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.474 179763 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.474 179763 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.474 179763 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.474 179763 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.474 179763 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.475 179763 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.475 179763 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.475 179763 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.475 179763 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.475 179763 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.475 179763 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.476 179763 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.476 179763 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.476 179763 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.476 179763 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.476 179763 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.476 179763 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.476 179763 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.477 179763 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.477 179763 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.477 179763 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.477 179763 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.477 179763 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.477 179763 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.477 179763 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.478 179763 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.478 179763 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.478 179763 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.478 179763 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.478 179763 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.478 179763 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.478 179763 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.479 179763 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.479 179763 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.479 179763 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.479 179763 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.479 179763 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.479 179763 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.479 179763 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.480 179763 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.480 179763 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.480 179763 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.480 179763 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.480 179763 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.480 179763 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.480 179763 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.481 179763 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.481 179763 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.481 179763 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.481 179763 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.481 179763 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.481 179763 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.482 179763 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.482 179763 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.482 179763 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.482 179763 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.482 179763 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.482 179763 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.482 179763 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.483 179763 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.483 179763 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.483 179763 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.483 179763 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.483 179763 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.483 179763 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.484 179763 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.484 179763 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.484 179763 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.484 179763 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.484 179763 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.484 179763 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.484 179763 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.485 179763 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.485 179763 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.485 179763 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.485 179763 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.485 179763 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.485 179763 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.485 179763 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.486 179763 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.486 179763 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.486 179763 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.486 179763 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.486 179763 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.486 179763 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.486 179763 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.487 179763 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.487 179763 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.487 179763 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.487 179763 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.487 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.488 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.488 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.488 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.488 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.488 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.488 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.489 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.489 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.489 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.489 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.489 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.489 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.490 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.490 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.490 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.490 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.490 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.490 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.491 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.491 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.491 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.491 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.491 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.491 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.492 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.492 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.492 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.492 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.492 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.493 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.493 179763 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.493 179763 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.493 179763 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.493 179763 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:35:24 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.494 179763 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 24 18:35:25 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v586: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:25 compute-0 ceph-mon[74927]: pgmap v586: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:27 compute-0 sshd-session[179877]: Accepted publickey for zuul from 192.168.122.30 port 51980 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:35:27 compute-0 systemd-logind[822]: New session 51 of user zuul.
Nov 24 18:35:27 compute-0 systemd[1]: Started Session 51 of User zuul.
Nov 24 18:35:27 compute-0 sshd-session[179877]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:35:27 compute-0 sudo[179881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:35:27 compute-0 sudo[179881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:35:27 compute-0 sudo[179881]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:27 compute-0 sudo[179927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:35:27 compute-0 sudo[179927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:35:27 compute-0 sudo[179927]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:27 compute-0 sudo[179981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:35:27 compute-0 sudo[179981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:35:27 compute-0 sudo[179981]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:27 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v587: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:27 compute-0 sudo[180008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:35:27 compute-0 sudo[180008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:35:27 compute-0 ceph-mon[74927]: pgmap v587: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:35:27 compute-0 sudo[180008]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:35:27 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:35:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:35:27 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:35:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:35:27 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:35:27 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev bb65d5c7-2907-4727-8532-bf05ec685320 does not exist
Nov 24 18:35:27 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 305b82d0-da00-4420-b15a-d7ac31f796cc does not exist
Nov 24 18:35:27 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev fa0d32c2-47d2-481f-8664-e6d29df4c5df does not exist
Nov 24 18:35:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:35:27 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:35:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:35:27 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:35:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:35:27 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:35:27 compute-0 sudo[180163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:35:27 compute-0 sudo[180163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:35:27 compute-0 sudo[180163]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:27 compute-0 sudo[180188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:35:27 compute-0 sudo[180188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:35:27 compute-0 sudo[180188]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:27 compute-0 sudo[180213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:35:27 compute-0 sudo[180213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:35:27 compute-0 sudo[180213]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:27 compute-0 sudo[180238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:35:28 compute-0 sudo[180238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:35:28 compute-0 python3.9[180162]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:35:28 compute-0 podman[180312]: 2025-11-24 18:35:28.324109131 +0000 UTC m=+0.041787762 container create 93172b563331195361c7ab29fcea129d7e7216cff14d8f54b48d071072f6daae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cartwright, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 18:35:28 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:35:28 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:35:28 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:35:28 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:35:28 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:35:28 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:35:28 compute-0 systemd[1]: Started libpod-conmon-93172b563331195361c7ab29fcea129d7e7216cff14d8f54b48d071072f6daae.scope.
Nov 24 18:35:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:35:28 compute-0 podman[180312]: 2025-11-24 18:35:28.394876828 +0000 UTC m=+0.112555459 container init 93172b563331195361c7ab29fcea129d7e7216cff14d8f54b48d071072f6daae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Nov 24 18:35:28 compute-0 podman[180312]: 2025-11-24 18:35:28.304461726 +0000 UTC m=+0.022140377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:35:28 compute-0 podman[180312]: 2025-11-24 18:35:28.402425504 +0000 UTC m=+0.120104135 container start 93172b563331195361c7ab29fcea129d7e7216cff14d8f54b48d071072f6daae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cartwright, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:35:28 compute-0 interesting_cartwright[180348]: 167 167
Nov 24 18:35:28 compute-0 podman[180312]: 2025-11-24 18:35:28.407205912 +0000 UTC m=+0.124884533 container attach 93172b563331195361c7ab29fcea129d7e7216cff14d8f54b48d071072f6daae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cartwright, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 18:35:28 compute-0 systemd[1]: libpod-93172b563331195361c7ab29fcea129d7e7216cff14d8f54b48d071072f6daae.scope: Deactivated successfully.
Nov 24 18:35:28 compute-0 podman[180312]: 2025-11-24 18:35:28.407732015 +0000 UTC m=+0.125410636 container died 93172b563331195361c7ab29fcea129d7e7216cff14d8f54b48d071072f6daae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:35:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b56bacb404663eac72ce302dadcd18fe93d08a5a28d8766d0c0c54202eda236-merged.mount: Deactivated successfully.
Nov 24 18:35:28 compute-0 podman[180312]: 2025-11-24 18:35:28.446304727 +0000 UTC m=+0.163983358 container remove 93172b563331195361c7ab29fcea129d7e7216cff14d8f54b48d071072f6daae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cartwright, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 18:35:28 compute-0 systemd[1]: libpod-conmon-93172b563331195361c7ab29fcea129d7e7216cff14d8f54b48d071072f6daae.scope: Deactivated successfully.
Nov 24 18:35:28 compute-0 podman[180395]: 2025-11-24 18:35:28.631304712 +0000 UTC m=+0.046610041 container create 655951ff7d544590983275eac954372af3e2b465b33bcdc9c186a1279a0e6cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_nightingale, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 18:35:28 compute-0 systemd[1]: Started libpod-conmon-655951ff7d544590983275eac954372af3e2b465b33bcdc9c186a1279a0e6cdf.scope.
Nov 24 18:35:28 compute-0 podman[180395]: 2025-11-24 18:35:28.610193751 +0000 UTC m=+0.025499100 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:35:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4255c04d1af4709861b101b749bb78d02e9a6669024a0723231b49549b2b8ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4255c04d1af4709861b101b749bb78d02e9a6669024a0723231b49549b2b8ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4255c04d1af4709861b101b749bb78d02e9a6669024a0723231b49549b2b8ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4255c04d1af4709861b101b749bb78d02e9a6669024a0723231b49549b2b8ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4255c04d1af4709861b101b749bb78d02e9a6669024a0723231b49549b2b8ca/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:35:28 compute-0 podman[180395]: 2025-11-24 18:35:28.734043317 +0000 UTC m=+0.149348666 container init 655951ff7d544590983275eac954372af3e2b465b33bcdc9c186a1279a0e6cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_nightingale, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 18:35:28 compute-0 podman[180395]: 2025-11-24 18:35:28.746985457 +0000 UTC m=+0.162290826 container start 655951ff7d544590983275eac954372af3e2b465b33bcdc9c186a1279a0e6cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 18:35:28 compute-0 podman[180395]: 2025-11-24 18:35:28.751410796 +0000 UTC m=+0.166716125 container attach 655951ff7d544590983275eac954372af3e2b465b33bcdc9c186a1279a0e6cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 18:35:29 compute-0 sudo[180518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozfjqlosbsodntqbmquufijiseughfoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009328.566016-34-119008957486177/AnsiballZ_command.py'
Nov 24 18:35:29 compute-0 sudo[180518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:29 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v588: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:29 compute-0 python3.9[180520]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:35:29 compute-0 ceph-mon[74927]: pgmap v588: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:29 compute-0 sudo[180518]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:29 compute-0 vigorous_nightingale[180440]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:35:29 compute-0 vigorous_nightingale[180440]: --> relative data size: 1.0
Nov 24 18:35:29 compute-0 vigorous_nightingale[180440]: --> All data devices are unavailable
Nov 24 18:35:29 compute-0 systemd[1]: libpod-655951ff7d544590983275eac954372af3e2b465b33bcdc9c186a1279a0e6cdf.scope: Deactivated successfully.
Nov 24 18:35:29 compute-0 podman[180395]: 2025-11-24 18:35:29.827272865 +0000 UTC m=+1.242578214 container died 655951ff7d544590983275eac954372af3e2b465b33bcdc9c186a1279a0e6cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:35:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4255c04d1af4709861b101b749bb78d02e9a6669024a0723231b49549b2b8ca-merged.mount: Deactivated successfully.
Nov 24 18:35:29 compute-0 podman[180395]: 2025-11-24 18:35:29.88424323 +0000 UTC m=+1.299548559 container remove 655951ff7d544590983275eac954372af3e2b465b33bcdc9c186a1279a0e6cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 18:35:29 compute-0 systemd[1]: libpod-conmon-655951ff7d544590983275eac954372af3e2b465b33bcdc9c186a1279a0e6cdf.scope: Deactivated successfully.
Nov 24 18:35:29 compute-0 sudo[180238]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:29 compute-0 sudo[180647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:35:29 compute-0 sudo[180647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:35:29 compute-0 sudo[180647]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:30 compute-0 sudo[180672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:35:30 compute-0 sudo[180672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:35:30 compute-0 sudo[180672]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:30 compute-0 sudo[180697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:35:30 compute-0 sudo[180697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:35:30 compute-0 sudo[180697]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:30 compute-0 sudo[180722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:35:30 compute-0 sudo[180722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:35:30 compute-0 podman[180834]: 2025-11-24 18:35:30.435121455 +0000 UTC m=+0.035935248 container create 92ffe8c19ada179d8f3246f42fbe90b952b8428ef623af2784d483b283828cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:35:30 compute-0 systemd[1]: Started libpod-conmon-92ffe8c19ada179d8f3246f42fbe90b952b8428ef623af2784d483b283828cfc.scope.
Nov 24 18:35:30 compute-0 sudo[180877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qriqkdxzwpjrfkmpdbqxinhmiqotvghi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009329.8317893-45-156812893703231/AnsiballZ_systemd_service.py'
Nov 24 18:35:30 compute-0 sudo[180877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:30 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:35:30 compute-0 podman[180834]: 2025-11-24 18:35:30.514475783 +0000 UTC m=+0.115289596 container init 92ffe8c19ada179d8f3246f42fbe90b952b8428ef623af2784d483b283828cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 18:35:30 compute-0 podman[180834]: 2025-11-24 18:35:30.420718679 +0000 UTC m=+0.021532492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:35:30 compute-0 podman[180834]: 2025-11-24 18:35:30.526485799 +0000 UTC m=+0.127299592 container start 92ffe8c19ada179d8f3246f42fbe90b952b8428ef623af2784d483b283828cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 24 18:35:30 compute-0 gallant_chaum[180879]: 167 167
Nov 24 18:35:30 compute-0 systemd[1]: libpod-92ffe8c19ada179d8f3246f42fbe90b952b8428ef623af2784d483b283828cfc.scope: Deactivated successfully.
Nov 24 18:35:30 compute-0 podman[180834]: 2025-11-24 18:35:30.533073522 +0000 UTC m=+0.133887315 container attach 92ffe8c19ada179d8f3246f42fbe90b952b8428ef623af2784d483b283828cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 18:35:30 compute-0 podman[180834]: 2025-11-24 18:35:30.533738068 +0000 UTC m=+0.134551871 container died 92ffe8c19ada179d8f3246f42fbe90b952b8428ef623af2784d483b283828cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 18:35:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf246aaccd6f7f625919c9d20f59b9751ee51871b419d8498f3033342aeb0250-merged.mount: Deactivated successfully.
Nov 24 18:35:30 compute-0 podman[180834]: 2025-11-24 18:35:30.571594233 +0000 UTC m=+0.172408036 container remove 92ffe8c19ada179d8f3246f42fbe90b952b8428ef623af2784d483b283828cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:35:30 compute-0 systemd[1]: libpod-conmon-92ffe8c19ada179d8f3246f42fbe90b952b8428ef623af2784d483b283828cfc.scope: Deactivated successfully.
Nov 24 18:35:30 compute-0 podman[180904]: 2025-11-24 18:35:30.71532897 +0000 UTC m=+0.037223660 container create 30336203ebeacd924e35bbdc1377bb75706d6bbe7b504a2c9930567b4de2b3d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_roentgen, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:35:30 compute-0 systemd[1]: Started libpod-conmon-30336203ebeacd924e35bbdc1377bb75706d6bbe7b504a2c9930567b4de2b3d0.scope.
Nov 24 18:35:30 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:35:30 compute-0 python3.9[180881]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 18:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80d900be46907f3a22811acc2e88d6034065ac495e4f56d6db6655b920520e17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80d900be46907f3a22811acc2e88d6034065ac495e4f56d6db6655b920520e17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80d900be46907f3a22811acc2e88d6034065ac495e4f56d6db6655b920520e17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80d900be46907f3a22811acc2e88d6034065ac495e4f56d6db6655b920520e17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:35:30 compute-0 systemd[1]: Reloading.
Nov 24 18:35:30 compute-0 podman[180904]: 2025-11-24 18:35:30.797840496 +0000 UTC m=+0.119735196 container init 30336203ebeacd924e35bbdc1377bb75706d6bbe7b504a2c9930567b4de2b3d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_roentgen, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:35:30 compute-0 podman[180904]: 2025-11-24 18:35:30.699798736 +0000 UTC m=+0.021693456 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:35:30 compute-0 podman[180904]: 2025-11-24 18:35:30.809015501 +0000 UTC m=+0.130910191 container start 30336203ebeacd924e35bbdc1377bb75706d6bbe7b504a2c9930567b4de2b3d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_roentgen, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 18:35:30 compute-0 podman[180904]: 2025-11-24 18:35:30.812088907 +0000 UTC m=+0.133983617 container attach 30336203ebeacd924e35bbdc1377bb75706d6bbe7b504a2c9930567b4de2b3d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_roentgen, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 18:35:30 compute-0 systemd-rc-local-generator[180950]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:35:30 compute-0 systemd-sysv-generator[180954]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:35:31 compute-0 sudo[180877]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:31 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v589: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:31 compute-0 ceph-mon[74927]: pgmap v589: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]: {
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:     "0": [
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:         {
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "devices": [
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "/dev/loop3"
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             ],
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "lv_name": "ceph_lv0",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "lv_size": "21470642176",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "name": "ceph_lv0",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "tags": {
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.cluster_name": "ceph",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.crush_device_class": "",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.encrypted": "0",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.osd_id": "0",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.type": "block",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.vdo": "0"
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             },
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "type": "block",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "vg_name": "ceph_vg0"
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:         }
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:     ],
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:     "1": [
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:         {
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "devices": [
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "/dev/loop4"
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             ],
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "lv_name": "ceph_lv1",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "lv_size": "21470642176",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "name": "ceph_lv1",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "tags": {
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.cluster_name": "ceph",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.crush_device_class": "",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.encrypted": "0",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.osd_id": "1",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.type": "block",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.vdo": "0"
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             },
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "type": "block",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "vg_name": "ceph_vg1"
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:         }
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:     ],
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:     "2": [
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:         {
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "devices": [
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "/dev/loop5"
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             ],
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "lv_name": "ceph_lv2",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "lv_size": "21470642176",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "name": "ceph_lv2",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "tags": {
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.cluster_name": "ceph",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.crush_device_class": "",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.encrypted": "0",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.osd_id": "2",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.type": "block",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:                 "ceph.vdo": "0"
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             },
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "type": "block",
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:             "vg_name": "ceph_vg2"
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:         }
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]:     ]
Nov 24 18:35:31 compute-0 frosty_roentgen[180921]: }
Nov 24 18:35:31 compute-0 systemd[1]: libpod-30336203ebeacd924e35bbdc1377bb75706d6bbe7b504a2c9930567b4de2b3d0.scope: Deactivated successfully.
Nov 24 18:35:31 compute-0 podman[180904]: 2025-11-24 18:35:31.550239653 +0000 UTC m=+0.872134373 container died 30336203ebeacd924e35bbdc1377bb75706d6bbe7b504a2c9930567b4de2b3d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_roentgen, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:35:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-80d900be46907f3a22811acc2e88d6034065ac495e4f56d6db6655b920520e17-merged.mount: Deactivated successfully.
Nov 24 18:35:31 compute-0 podman[180904]: 2025-11-24 18:35:31.617540094 +0000 UTC m=+0.939434784 container remove 30336203ebeacd924e35bbdc1377bb75706d6bbe7b504a2c9930567b4de2b3d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 18:35:31 compute-0 systemd[1]: libpod-conmon-30336203ebeacd924e35bbdc1377bb75706d6bbe7b504a2c9930567b4de2b3d0.scope: Deactivated successfully.
Nov 24 18:35:31 compute-0 sudo[180722]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:31 compute-0 sudo[181099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:35:31 compute-0 sudo[181099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:35:31 compute-0 sudo[181099]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:31 compute-0 sudo[181151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:35:31 compute-0 sudo[181151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:35:31 compute-0 sudo[181151]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:31 compute-0 sudo[181176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:35:31 compute-0 sudo[181176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:35:31 compute-0 sudo[181176]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:31 compute-0 python3.9[181148]: ansible-ansible.builtin.service_facts Invoked
Nov 24 18:35:31 compute-0 sudo[181201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:35:31 compute-0 sudo[181201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:35:31 compute-0 network[181242]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 18:35:31 compute-0 network[181243]: 'network-scripts' will be removed from distribution in near future.
Nov 24 18:35:31 compute-0 network[181244]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 18:35:32 compute-0 podman[181291]: 2025-11-24 18:35:32.270527378 +0000 UTC m=+0.042862379 container create 20f6ce94f95e1d403e131a8ada6b85e3d51c39291a06f8a466f3a9c1fd1fb0d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:35:32 compute-0 podman[181291]: 2025-11-24 18:35:32.248783251 +0000 UTC m=+0.021118302 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:35:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:35:32 compute-0 systemd[1]: Started libpod-conmon-20f6ce94f95e1d403e131a8ada6b85e3d51c39291a06f8a466f3a9c1fd1fb0d1.scope.
Nov 24 18:35:32 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:35:32 compute-0 podman[181291]: 2025-11-24 18:35:32.734475417 +0000 UTC m=+0.506810438 container init 20f6ce94f95e1d403e131a8ada6b85e3d51c39291a06f8a466f3a9c1fd1fb0d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_meninsky, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 18:35:32 compute-0 podman[181291]: 2025-11-24 18:35:32.748350379 +0000 UTC m=+0.520685410 container start 20f6ce94f95e1d403e131a8ada6b85e3d51c39291a06f8a466f3a9c1fd1fb0d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:35:32 compute-0 podman[181291]: 2025-11-24 18:35:32.75244744 +0000 UTC m=+0.524782461 container attach 20f6ce94f95e1d403e131a8ada6b85e3d51c39291a06f8a466f3a9c1fd1fb0d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:35:32 compute-0 systemd[1]: libpod-20f6ce94f95e1d403e131a8ada6b85e3d51c39291a06f8a466f3a9c1fd1fb0d1.scope: Deactivated successfully.
Nov 24 18:35:32 compute-0 inspiring_meninsky[181308]: 167 167
Nov 24 18:35:32 compute-0 conmon[181308]: conmon 20f6ce94f95e1d403e13 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-20f6ce94f95e1d403e131a8ada6b85e3d51c39291a06f8a466f3a9c1fd1fb0d1.scope/container/memory.events
Nov 24 18:35:32 compute-0 podman[181291]: 2025-11-24 18:35:32.753735662 +0000 UTC m=+0.526070663 container died 20f6ce94f95e1d403e131a8ada6b85e3d51c39291a06f8a466f3a9c1fd1fb0d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_meninsky, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:35:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-6acfa450286cb5df5123856feb8280db0e8998697f450ce8da84368e08d46ff0-merged.mount: Deactivated successfully.
Nov 24 18:35:32 compute-0 podman[181291]: 2025-11-24 18:35:32.792400446 +0000 UTC m=+0.564735447 container remove 20f6ce94f95e1d403e131a8ada6b85e3d51c39291a06f8a466f3a9c1fd1fb0d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:35:32 compute-0 systemd[1]: libpod-conmon-20f6ce94f95e1d403e131a8ada6b85e3d51c39291a06f8a466f3a9c1fd1fb0d1.scope: Deactivated successfully.
Nov 24 18:35:32 compute-0 podman[181343]: 2025-11-24 18:35:32.98022427 +0000 UTC m=+0.049059505 container create d877fa2c54687a5fa31738f15b0fd92de1aee29c9b0c3dc78444f1ed28486f0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_sinoussi, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:35:33 compute-0 systemd[1]: Started libpod-conmon-d877fa2c54687a5fa31738f15b0fd92de1aee29c9b0c3dc78444f1ed28486f0a.scope.
Nov 24 18:35:33 compute-0 podman[181343]: 2025-11-24 18:35:32.958523297 +0000 UTC m=+0.027358572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:35:33 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:35:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc90ef8e381f3c8a3a7ea7b6a186e71c68adf4f904921723bb1abced4d3a9f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:35:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc90ef8e381f3c8a3a7ea7b6a186e71c68adf4f904921723bb1abced4d3a9f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:35:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc90ef8e381f3c8a3a7ea7b6a186e71c68adf4f904921723bb1abced4d3a9f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:35:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc90ef8e381f3c8a3a7ea7b6a186e71c68adf4f904921723bb1abced4d3a9f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:35:33 compute-0 podman[181343]: 2025-11-24 18:35:33.070642739 +0000 UTC m=+0.139477984 container init d877fa2c54687a5fa31738f15b0fd92de1aee29c9b0c3dc78444f1ed28486f0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:35:33 compute-0 podman[181343]: 2025-11-24 18:35:33.077162649 +0000 UTC m=+0.145997884 container start d877fa2c54687a5fa31738f15b0fd92de1aee29c9b0c3dc78444f1ed28486f0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_sinoussi, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:35:33 compute-0 podman[181343]: 2025-11-24 18:35:33.08003773 +0000 UTC m=+0.148872965 container attach d877fa2c54687a5fa31738f15b0fd92de1aee29c9b0c3dc78444f1ed28486f0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_sinoussi, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 24 18:35:33 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v590: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:33 compute-0 ceph-mon[74927]: pgmap v590: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:34 compute-0 beautiful_sinoussi[181365]: {
Nov 24 18:35:34 compute-0 beautiful_sinoussi[181365]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:35:34 compute-0 beautiful_sinoussi[181365]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:35:34 compute-0 beautiful_sinoussi[181365]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:35:34 compute-0 beautiful_sinoussi[181365]:         "osd_id": 0,
Nov 24 18:35:34 compute-0 beautiful_sinoussi[181365]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:35:34 compute-0 beautiful_sinoussi[181365]:         "type": "bluestore"
Nov 24 18:35:34 compute-0 beautiful_sinoussi[181365]:     },
Nov 24 18:35:34 compute-0 beautiful_sinoussi[181365]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:35:34 compute-0 beautiful_sinoussi[181365]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:35:34 compute-0 beautiful_sinoussi[181365]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:35:34 compute-0 beautiful_sinoussi[181365]:         "osd_id": 1,
Nov 24 18:35:34 compute-0 beautiful_sinoussi[181365]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:35:34 compute-0 beautiful_sinoussi[181365]:         "type": "bluestore"
Nov 24 18:35:34 compute-0 beautiful_sinoussi[181365]:     },
Nov 24 18:35:34 compute-0 beautiful_sinoussi[181365]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:35:34 compute-0 beautiful_sinoussi[181365]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:35:34 compute-0 beautiful_sinoussi[181365]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:35:34 compute-0 beautiful_sinoussi[181365]:         "osd_id": 2,
Nov 24 18:35:34 compute-0 beautiful_sinoussi[181365]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:35:34 compute-0 beautiful_sinoussi[181365]:         "type": "bluestore"
Nov 24 18:35:34 compute-0 beautiful_sinoussi[181365]:     }
Nov 24 18:35:34 compute-0 beautiful_sinoussi[181365]: }
Nov 24 18:35:34 compute-0 systemd[1]: libpod-d877fa2c54687a5fa31738f15b0fd92de1aee29c9b0c3dc78444f1ed28486f0a.scope: Deactivated successfully.
Nov 24 18:35:34 compute-0 podman[181343]: 2025-11-24 18:35:34.067099759 +0000 UTC m=+1.135935004 container died d877fa2c54687a5fa31738f15b0fd92de1aee29c9b0c3dc78444f1ed28486f0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:35:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bc90ef8e381f3c8a3a7ea7b6a186e71c68adf4f904921723bb1abced4d3a9f4-merged.mount: Deactivated successfully.
Nov 24 18:35:34 compute-0 podman[181343]: 2025-11-24 18:35:34.136513713 +0000 UTC m=+1.205348948 container remove d877fa2c54687a5fa31738f15b0fd92de1aee29c9b0c3dc78444f1ed28486f0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:35:34 compute-0 systemd[1]: libpod-conmon-d877fa2c54687a5fa31738f15b0fd92de1aee29c9b0c3dc78444f1ed28486f0a.scope: Deactivated successfully.
Nov 24 18:35:34 compute-0 sudo[181201]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:35:34 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:35:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:35:34 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:35:34 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 0fe8ff92-d150-41d6-934c-aeb086d54638 does not exist
Nov 24 18:35:34 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev dbce541f-d731-4e67-95c6-6fe7f725ca4b does not exist
Nov 24 18:35:34 compute-0 sudo[181441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:35:34 compute-0 sudo[181441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:35:34 compute-0 sudo[181441]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:34 compute-0 sudo[181466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:35:34 compute-0 sudo[181466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:35:34 compute-0 sudo[181466]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:35:34
Nov 24 18:35:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:35:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:35:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['vms', 'images', 'cephfs.cephfs.meta', 'volumes', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'default.rgw.control']
Nov 24 18:35:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:35:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:35:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:35:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:35:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:35:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:35:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:35:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:35:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:35:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:35:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:35:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:35:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:35:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:35:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:35:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:35:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:35:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:35:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:35:35 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v591: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:36 compute-0 ceph-mon[74927]: pgmap v591: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:36 compute-0 sudo[181696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmwagrmsbogyxxqyomcademdvuxsqplm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009336.2114685-64-227359969261644/AnsiballZ_systemd_service.py'
Nov 24 18:35:36 compute-0 sudo[181696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:36 compute-0 python3.9[181698]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:35:36 compute-0 sudo[181696]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:37 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v592: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:37 compute-0 ceph-mon[74927]: pgmap v592: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:37 compute-0 sudo[181849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctyqlpdiuuaevvnweplhxckjoshxinob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009337.1203096-64-189363095290667/AnsiballZ_systemd_service.py'
Nov 24 18:35:37 compute-0 sudo[181849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:35:37 compute-0 python3.9[181851]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:35:37 compute-0 sudo[181849]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:38 compute-0 sudo[182002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhihkispkovdpakzhetkcnslfryacxrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009337.8888025-64-265443547652157/AnsiballZ_systemd_service.py'
Nov 24 18:35:38 compute-0 sudo[182002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:38 compute-0 python3.9[182004]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:35:38 compute-0 sudo[182002]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:39 compute-0 sudo[182155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwyuwiiydkxxwujnliczmrsbivkdrvfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009338.7098603-64-26137151276952/AnsiballZ_systemd_service.py'
Nov 24 18:35:39 compute-0 sudo[182155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:39 compute-0 python3.9[182157]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:35:39 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v593: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:39 compute-0 sudo[182155]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:39 compute-0 ceph-mon[74927]: pgmap v593: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:39 compute-0 sudo[182308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euqwvpzeuoyckwaocnvvhinawiudnosv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009339.460959-64-267578653809598/AnsiballZ_systemd_service.py'
Nov 24 18:35:39 compute-0 sudo[182308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:40 compute-0 python3.9[182310]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:35:40 compute-0 sudo[182308]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:40 compute-0 sudo[182461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weafswxnrcijnjskurhwcufkiedoaxwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009340.3027136-64-53014272813662/AnsiballZ_systemd_service.py'
Nov 24 18:35:40 compute-0 sudo[182461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:40 compute-0 python3.9[182463]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:35:40 compute-0 sudo[182461]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:41 compute-0 sudo[182614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orzkmsoftiotlwopcgjsrzmeipsyreuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009340.9986846-64-111918197033973/AnsiballZ_systemd_service.py'
Nov 24 18:35:41 compute-0 sudo[182614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:41 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v594: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:41 compute-0 ceph-mon[74927]: pgmap v594: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:41 compute-0 python3.9[182616]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:35:41 compute-0 sudo[182614]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:42 compute-0 sudo[182767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycrbrujdladxfxbpojxnotedppapoilc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009341.9280539-116-55177410721298/AnsiballZ_file.py'
Nov 24 18:35:42 compute-0 sudo[182767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:42 compute-0 python3.9[182769]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:35:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:35:42 compute-0 sudo[182767]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:43 compute-0 sudo[182919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsewkhjkkoruzpsfptcqsltbyohgguit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009342.7467167-116-141242992471399/AnsiballZ_file.py'
Nov 24 18:35:43 compute-0 sudo[182919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:35:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:35:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:35:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:35:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:35:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:35:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:35:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:35:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:35:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:35:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:35:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:35:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:35:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:35:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:35:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:35:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:35:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:35:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:35:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:35:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:35:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:35:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:35:43 compute-0 python3.9[182921]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:35:43 compute-0 sudo[182919]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:43 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v595: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:43 compute-0 ceph-mon[74927]: pgmap v595: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:43 compute-0 sudo[183071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skvuflwsbcfprolowgpyqqcpiwfhgslp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009343.3648965-116-238736073388879/AnsiballZ_file.py'
Nov 24 18:35:43 compute-0 sudo[183071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:43 compute-0 python3.9[183073]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:35:43 compute-0 sudo[183071]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:44 compute-0 sudo[183223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctgiucffiynkeemuayvzqhhrrbdcvdyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009343.9495711-116-135484081016149/AnsiballZ_file.py'
Nov 24 18:35:44 compute-0 sudo[183223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:44 compute-0 python3.9[183225]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:35:44 compute-0 sudo[183223]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:44 compute-0 sudo[183375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgpldqxqcrziwepbclitsqzmnjsdegfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009344.5178514-116-241864929512219/AnsiballZ_file.py'
Nov 24 18:35:44 compute-0 sudo[183375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:45 compute-0 python3.9[183377]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:35:45 compute-0 sudo[183375]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:45 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v596: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:45 compute-0 ceph-mon[74927]: pgmap v596: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:45 compute-0 sudo[183527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tghiuprjqnekbseqwghbdwsjnvwaanma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009345.1699185-116-250738335062846/AnsiballZ_file.py'
Nov 24 18:35:45 compute-0 sudo[183527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:45 compute-0 python3.9[183529]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:35:45 compute-0 sudo[183527]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:46 compute-0 sudo[183694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfxwcigzbkrtneosqfadyualzsqpugbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009345.8685582-116-14065906024404/AnsiballZ_file.py'
Nov 24 18:35:46 compute-0 sudo[183694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:46 compute-0 podman[183653]: 2025-11-24 18:35:46.265703353 +0000 UTC m=+0.102447556 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true)
Nov 24 18:35:46 compute-0 python3.9[183701]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:35:46 compute-0 sudo[183694]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:46 compute-0 sudo[183857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqmvpkyaykqwbzcvcrfsbwpxdalziotl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009346.6163418-166-278753228768961/AnsiballZ_file.py'
Nov 24 18:35:46 compute-0 sudo[183857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:47 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v597: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:47 compute-0 python3.9[183859]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:35:47 compute-0 sudo[183857]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:47 compute-0 ceph-mon[74927]: pgmap v597: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:35:47 compute-0 sudo[184009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbstaxnpcmebbpwrguswwjyywkjheogm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009347.5271618-166-18564557867947/AnsiballZ_file.py'
Nov 24 18:35:47 compute-0 sudo[184009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:47 compute-0 python3.9[184011]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:35:47 compute-0 sudo[184009]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:48 compute-0 sudo[184161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqtekipcogiuufnrxqhruifadpbpztol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009348.1076071-166-20362576902839/AnsiballZ_file.py'
Nov 24 18:35:48 compute-0 sudo[184161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:48 compute-0 python3.9[184163]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:35:48 compute-0 sudo[184161]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:48 compute-0 sudo[184313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elhhdgknfmrtnirbvfbriomqgjzdfxvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009348.6764674-166-60768598621371/AnsiballZ_file.py'
Nov 24 18:35:48 compute-0 sudo[184313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:49 compute-0 python3.9[184315]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:35:49 compute-0 sudo[184313]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:49 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v598: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:49 compute-0 ceph-mon[74927]: pgmap v598: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:49 compute-0 sudo[184465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uauwbpdjeekhttzinmoywzdlcmzhiiur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009349.288875-166-138741528277313/AnsiballZ_file.py'
Nov 24 18:35:49 compute-0 sudo[184465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:49 compute-0 python3.9[184467]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:35:49 compute-0 sudo[184465]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:50 compute-0 sudo[184617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejtpqnujczuhnfjdgnsugmgnghvfsybe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009349.8837798-166-62099865740174/AnsiballZ_file.py'
Nov 24 18:35:50 compute-0 sudo[184617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:50 compute-0 python3.9[184619]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:35:50 compute-0 sudo[184617]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:50 compute-0 sudo[184784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shsyjagfagcnahsiqtdoebrfrdopsfda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009350.5011115-166-113914264448998/AnsiballZ_file.py'
Nov 24 18:35:50 compute-0 sudo[184784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:50 compute-0 podman[184743]: 2025-11-24 18:35:50.817665367 +0000 UTC m=+0.060273671 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 24 18:35:51 compute-0 python3.9[184788]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:35:51 compute-0 sudo[184784]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:51 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v599: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:51 compute-0 ceph-mon[74927]: pgmap v599: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:51 compute-0 sudo[184939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmbomiueyjyofegqxseprowryhoqqdbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009351.284237-217-207188199978722/AnsiballZ_command.py'
Nov 24 18:35:51 compute-0 sudo[184939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:51 compute-0 python3.9[184941]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:35:51 compute-0 sudo[184939]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:35:52 compute-0 python3.9[185093]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 18:35:53 compute-0 sudo[185243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhlsvirzfvdkrwltemutyhrbbgqemrqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009352.9573607-235-173945647531333/AnsiballZ_systemd_service.py'
Nov 24 18:35:53 compute-0 sudo[185243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:53 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v600: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:53 compute-0 ceph-mon[74927]: pgmap v600: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:53 compute-0 python3.9[185245]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 18:35:53 compute-0 systemd[1]: Reloading.
Nov 24 18:35:53 compute-0 systemd-rc-local-generator[185272]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:35:53 compute-0 systemd-sysv-generator[185275]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:35:53 compute-0 sudo[185243]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:54 compute-0 sudo[185430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrshllohcjcfzulolgbsriznlhqoaynv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009354.0552921-243-199722948909308/AnsiballZ_command.py'
Nov 24 18:35:54 compute-0 sudo[185430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:54 compute-0 python3.9[185432]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:35:54 compute-0 sudo[185430]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:55 compute-0 sudo[185583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fweksrhbzzailuuassyuyzsswgkhtwwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009354.749228-243-4597443674736/AnsiballZ_command.py'
Nov 24 18:35:55 compute-0 sudo[185583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:55 compute-0 python3.9[185585]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:35:55 compute-0 sudo[185583]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:55 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v601: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:55 compute-0 ceph-mon[74927]: pgmap v601: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:55 compute-0 sudo[185736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmsjigecsdvfifodfmpjnwvmgmeievva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009355.468958-243-63119704473701/AnsiballZ_command.py'
Nov 24 18:35:55 compute-0 sudo[185736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:56 compute-0 python3.9[185738]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:35:56 compute-0 sudo[185736]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:56 compute-0 sudo[185889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aogzenzcgolqipbntfqgutvsfbyfsula ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009356.2077165-243-88066729538493/AnsiballZ_command.py'
Nov 24 18:35:56 compute-0 sudo[185889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:56 compute-0 python3.9[185891]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:35:56 compute-0 sudo[185889]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:57 compute-0 sudo[186042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljgdgvefzpelcystzrjcdltrckiacphn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009356.771165-243-170274240529031/AnsiballZ_command.py'
Nov 24 18:35:57 compute-0 sudo[186042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:57 compute-0 python3.9[186044]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:35:57 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v602: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:57 compute-0 sudo[186042]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:57 compute-0 ceph-mon[74927]: pgmap v602: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:35:57 compute-0 sudo[186195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuxlzgjitmuqahxbsdvcpnrhaccicrlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009357.421759-243-133920863943992/AnsiballZ_command.py'
Nov 24 18:35:57 compute-0 sudo[186195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:58 compute-0 python3.9[186197]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:35:58 compute-0 sudo[186195]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:58 compute-0 sudo[186348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oanjiecpayfcpwxnnezllmzvpeigwhxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009358.3239455-243-195673358551949/AnsiballZ_command.py'
Nov 24 18:35:58 compute-0 sudo[186348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:58 compute-0 python3.9[186350]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:35:58 compute-0 sudo[186348]: pam_unix(sudo:session): session closed for user root
Nov 24 18:35:59 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v603: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:59 compute-0 ceph-mon[74927]: pgmap v603: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:35:59 compute-0 sudo[186501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npulocksircxwqqrhfcydemibgvagcwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009359.2915368-297-219962303431496/AnsiballZ_getent.py'
Nov 24 18:35:59 compute-0 sudo[186501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:35:59 compute-0 python3.9[186503]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 24 18:35:59 compute-0 sudo[186501]: pam_unix(sudo:session): session closed for user root
Nov 24 18:36:00 compute-0 sudo[186654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vifpxkqwdyxkyelvufblgghkumrkjkdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009360.1413298-305-146506493882043/AnsiballZ_group.py'
Nov 24 18:36:00 compute-0 sudo[186654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:36:00 compute-0 python3.9[186656]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 18:36:00 compute-0 groupadd[186657]: group added to /etc/group: name=libvirt, GID=42473
Nov 24 18:36:00 compute-0 groupadd[186657]: group added to /etc/gshadow: name=libvirt
Nov 24 18:36:00 compute-0 groupadd[186657]: new group: name=libvirt, GID=42473
Nov 24 18:36:00 compute-0 sudo[186654]: pam_unix(sudo:session): session closed for user root
Nov 24 18:36:01 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v604: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:01 compute-0 ceph-mon[74927]: pgmap v604: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:01 compute-0 sudo[186812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbpreqhotlwzfpixdxulahrgcgitumfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009361.0366602-313-71382144955430/AnsiballZ_user.py'
Nov 24 18:36:01 compute-0 sudo[186812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:36:01 compute-0 python3.9[186814]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 24 18:36:01 compute-0 useradd[186816]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Nov 24 18:36:01 compute-0 sudo[186812]: pam_unix(sudo:session): session closed for user root
Nov 24 18:36:02 compute-0 sudo[186972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqcdeqwphttuqyyhirlditsmulcgpfev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009362.2557318-324-181550923034687/AnsiballZ_setup.py'
Nov 24 18:36:02 compute-0 sudo[186972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:36:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:36:02 compute-0 python3.9[186974]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 18:36:03 compute-0 sudo[186972]: pam_unix(sudo:session): session closed for user root
Nov 24 18:36:03 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v605: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:03 compute-0 ceph-mon[74927]: pgmap v605: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:03 compute-0 sudo[187056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niyvjsrwvqhyvxgigwffjukqrhdrzbpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009362.2557318-324-181550923034687/AnsiballZ_dnf.py'
Nov 24 18:36:03 compute-0 sudo[187056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:36:03 compute-0 python3.9[187058]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:36:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:36:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:36:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:36:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:36:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:36:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:36:05 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v606: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:05 compute-0 ceph-mon[74927]: pgmap v606: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:07 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v607: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:07 compute-0 ceph-mon[74927]: pgmap v607: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:36:09 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v608: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:09 compute-0 ceph-mon[74927]: pgmap v608: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:11 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v609: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:11 compute-0 ceph-mon[74927]: pgmap v609: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:36:13 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v610: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:13 compute-0 ceph-mon[74927]: pgmap v610: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:15 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v611: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:15 compute-0 ceph-mon[74927]: pgmap v611: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:16 compute-0 podman[187109]: 2025-11-24 18:36:16.997385258 +0000 UTC m=+0.092135913 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 18:36:17 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v612: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:17 compute-0 ceph-mon[74927]: pgmap v612: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:36:19 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v613: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:19 compute-0 ceph-mon[74927]: pgmap v613: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:20 compute-0 podman[187134]: 2025-11-24 18:36:20.958638022 +0000 UTC m=+0.054310084 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 18:36:21 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v614: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:22 compute-0 ceph-mon[74927]: pgmap v614: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:36:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:36:22.727 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:36:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:36:22.728 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:36:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:36:22.728 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:36:23 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v615: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:24 compute-0 ceph-mon[74927]: pgmap v615: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:25 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v616: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:26 compute-0 ceph-mon[74927]: pgmap v616: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:27 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v617: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:36:28 compute-0 ceph-mon[74927]: pgmap v617: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:29 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v618: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:30 compute-0 ceph-mon[74927]: pgmap v618: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:31 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v619: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:32 compute-0 ceph-mon[74927]: pgmap v619: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:36:33 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v620: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:34 compute-0 sudo[187327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:36:34 compute-0 sudo[187327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:36:34 compute-0 sudo[187327]: pam_unix(sudo:session): session closed for user root
Nov 24 18:36:34 compute-0 ceph-mon[74927]: pgmap v620: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:34 compute-0 sudo[187352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:36:34 compute-0 sudo[187352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:36:34 compute-0 sudo[187352]: pam_unix(sudo:session): session closed for user root
Nov 24 18:36:34 compute-0 sudo[187377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:36:34 compute-0 sudo[187377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:36:34 compute-0 sudo[187377]: pam_unix(sudo:session): session closed for user root
Nov 24 18:36:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:36:34
Nov 24 18:36:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:36:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:36:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'backups', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'images', 'volumes', 'default.rgw.control', 'default.rgw.meta']
Nov 24 18:36:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:36:34 compute-0 sudo[187402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:36:34 compute-0 sudo[187402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:36:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:36:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:36:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:36:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:36:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:36:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:36:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:36:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:36:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:36:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:36:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:36:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:36:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:36:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:36:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:36:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:36:35 compute-0 sudo[187402]: pam_unix(sudo:session): session closed for user root
Nov 24 18:36:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:36:35 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:36:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:36:35 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:36:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:36:35 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:36:35 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev cf065138-3ddc-4124-b90f-207e37a89ef9 does not exist
Nov 24 18:36:35 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 73580fb4-e01c-476c-b099-beaa02f1c5ec does not exist
Nov 24 18:36:35 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 6e1cde58-8e1c-465a-a1df-7a720aa8ca06 does not exist
Nov 24 18:36:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:36:35 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:36:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:36:35 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:36:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:36:35 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:36:35 compute-0 sudo[187457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:36:35 compute-0 sudo[187457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:36:35 compute-0 sudo[187457]: pam_unix(sudo:session): session closed for user root
Nov 24 18:36:35 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v621: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:35 compute-0 sudo[187482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:36:35 compute-0 sudo[187482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:36:35 compute-0 sudo[187482]: pam_unix(sudo:session): session closed for user root
Nov 24 18:36:35 compute-0 sudo[187507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:36:35 compute-0 sudo[187507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:36:35 compute-0 sudo[187507]: pam_unix(sudo:session): session closed for user root
Nov 24 18:36:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:36:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:36:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:36:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:36:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:36:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:36:35 compute-0 sudo[187532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:36:35 compute-0 sudo[187532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:36:35 compute-0 podman[187598]: 2025-11-24 18:36:35.738085086 +0000 UTC m=+0.047942308 container create d596b78641045a72355f215b49e2058492c80e189467f33e97ae594d9363e002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_poincare, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 18:36:35 compute-0 systemd[1]: Started libpod-conmon-d596b78641045a72355f215b49e2058492c80e189467f33e97ae594d9363e002.scope.
Nov 24 18:36:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:36:35 compute-0 podman[187598]: 2025-11-24 18:36:35.709757611 +0000 UTC m=+0.019614843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:36:35 compute-0 podman[187598]: 2025-11-24 18:36:35.813502898 +0000 UTC m=+0.123360110 container init d596b78641045a72355f215b49e2058492c80e189467f33e97ae594d9363e002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_poincare, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:36:35 compute-0 podman[187598]: 2025-11-24 18:36:35.821864553 +0000 UTC m=+0.131721765 container start d596b78641045a72355f215b49e2058492c80e189467f33e97ae594d9363e002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_poincare, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 18:36:35 compute-0 podman[187598]: 2025-11-24 18:36:35.826208339 +0000 UTC m=+0.136065551 container attach d596b78641045a72355f215b49e2058492c80e189467f33e97ae594d9363e002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_poincare, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 18:36:35 compute-0 elated_poincare[187614]: 167 167
Nov 24 18:36:35 compute-0 systemd[1]: libpod-d596b78641045a72355f215b49e2058492c80e189467f33e97ae594d9363e002.scope: Deactivated successfully.
Nov 24 18:36:35 compute-0 conmon[187614]: conmon d596b78641045a72355f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d596b78641045a72355f215b49e2058492c80e189467f33e97ae594d9363e002.scope/container/memory.events
Nov 24 18:36:35 compute-0 podman[187598]: 2025-11-24 18:36:35.83846374 +0000 UTC m=+0.148320972 container died d596b78641045a72355f215b49e2058492c80e189467f33e97ae594d9363e002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 18:36:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f3fe67bb07375c7ac942b4e16f431476c5de4f2f1e0add50aa3545380b1159b-merged.mount: Deactivated successfully.
Nov 24 18:36:35 compute-0 podman[187598]: 2025-11-24 18:36:35.885632598 +0000 UTC m=+0.195489820 container remove d596b78641045a72355f215b49e2058492c80e189467f33e97ae594d9363e002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:36:35 compute-0 systemd[1]: libpod-conmon-d596b78641045a72355f215b49e2058492c80e189467f33e97ae594d9363e002.scope: Deactivated successfully.
Nov 24 18:36:36 compute-0 podman[187637]: 2025-11-24 18:36:36.034695487 +0000 UTC m=+0.039133241 container create 3b9bfae0d071364546f4c3518c3ee417c8bd92668d176b09df5e83461c4fc936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 18:36:36 compute-0 systemd[1]: Started libpod-conmon-3b9bfae0d071364546f4c3518c3ee417c8bd92668d176b09df5e83461c4fc936.scope.
Nov 24 18:36:36 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf01e8656f307c951776757ddbb5b27d3b1b0602a3bdac6ae7d183ec60dbbde/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf01e8656f307c951776757ddbb5b27d3b1b0602a3bdac6ae7d183ec60dbbde/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf01e8656f307c951776757ddbb5b27d3b1b0602a3bdac6ae7d183ec60dbbde/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf01e8656f307c951776757ddbb5b27d3b1b0602a3bdac6ae7d183ec60dbbde/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf01e8656f307c951776757ddbb5b27d3b1b0602a3bdac6ae7d183ec60dbbde/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:36:36 compute-0 podman[187637]: 2025-11-24 18:36:36.016684685 +0000 UTC m=+0.021122459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:36:36 compute-0 podman[187637]: 2025-11-24 18:36:36.117619723 +0000 UTC m=+0.122057527 container init 3b9bfae0d071364546f4c3518c3ee417c8bd92668d176b09df5e83461c4fc936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 18:36:36 compute-0 podman[187637]: 2025-11-24 18:36:36.12892901 +0000 UTC m=+0.133366764 container start 3b9bfae0d071364546f4c3518c3ee417c8bd92668d176b09df5e83461c4fc936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pike, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 18:36:36 compute-0 podman[187637]: 2025-11-24 18:36:36.135952683 +0000 UTC m=+0.140390467 container attach 3b9bfae0d071364546f4c3518c3ee417c8bd92668d176b09df5e83461c4fc936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:36:37 compute-0 awesome_pike[187654]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:36:37 compute-0 awesome_pike[187654]: --> relative data size: 1.0
Nov 24 18:36:37 compute-0 awesome_pike[187654]: --> All data devices are unavailable
Nov 24 18:36:37 compute-0 systemd[1]: libpod-3b9bfae0d071364546f4c3518c3ee417c8bd92668d176b09df5e83461c4fc936.scope: Deactivated successfully.
Nov 24 18:36:37 compute-0 systemd[1]: libpod-3b9bfae0d071364546f4c3518c3ee417c8bd92668d176b09df5e83461c4fc936.scope: Consumed 1.026s CPU time.
Nov 24 18:36:37 compute-0 ceph-mon[74927]: pgmap v621: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:37 compute-0 podman[187688]: 2025-11-24 18:36:37.29001751 +0000 UTC m=+0.031447503 container died 3b9bfae0d071364546f4c3518c3ee417c8bd92668d176b09df5e83461c4fc936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:36:37 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v622: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bf01e8656f307c951776757ddbb5b27d3b1b0602a3bdac6ae7d183ec60dbbde-merged.mount: Deactivated successfully.
Nov 24 18:36:37 compute-0 podman[187688]: 2025-11-24 18:36:37.354465942 +0000 UTC m=+0.095895915 container remove 3b9bfae0d071364546f4c3518c3ee417c8bd92668d176b09df5e83461c4fc936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pike, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:36:37 compute-0 systemd[1]: libpod-conmon-3b9bfae0d071364546f4c3518c3ee417c8bd92668d176b09df5e83461c4fc936.scope: Deactivated successfully.
Nov 24 18:36:37 compute-0 sudo[187532]: pam_unix(sudo:session): session closed for user root
Nov 24 18:36:37 compute-0 sudo[187703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:36:37 compute-0 sudo[187703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:36:37 compute-0 sudo[187703]: pam_unix(sudo:session): session closed for user root
Nov 24 18:36:37 compute-0 sudo[187728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:36:37 compute-0 sudo[187728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:36:37 compute-0 sudo[187728]: pam_unix(sudo:session): session closed for user root
Nov 24 18:36:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:36:37 compute-0 sudo[187753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:36:37 compute-0 sudo[187753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:36:37 compute-0 sudo[187753]: pam_unix(sudo:session): session closed for user root
Nov 24 18:36:37 compute-0 sudo[187778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:36:37 compute-0 sudo[187778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:36:38 compute-0 podman[187843]: 2025-11-24 18:36:38.052882396 +0000 UTC m=+0.066918793 container create 52468178b66d1e6996fa8dc7172b77161c094fba63633b39b6cdeaa1b707b0d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_thompson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:36:38 compute-0 systemd[1]: Started libpod-conmon-52468178b66d1e6996fa8dc7172b77161c094fba63633b39b6cdeaa1b707b0d3.scope.
Nov 24 18:36:38 compute-0 podman[187843]: 2025-11-24 18:36:38.011270415 +0000 UTC m=+0.025306862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:36:38 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:36:38 compute-0 podman[187843]: 2025-11-24 18:36:38.13897196 +0000 UTC m=+0.153008347 container init 52468178b66d1e6996fa8dc7172b77161c094fba63633b39b6cdeaa1b707b0d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 18:36:38 compute-0 podman[187843]: 2025-11-24 18:36:38.150837171 +0000 UTC m=+0.164873568 container start 52468178b66d1e6996fa8dc7172b77161c094fba63633b39b6cdeaa1b707b0d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_thompson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:36:38 compute-0 podman[187843]: 2025-11-24 18:36:38.153961227 +0000 UTC m=+0.167997614 container attach 52468178b66d1e6996fa8dc7172b77161c094fba63633b39b6cdeaa1b707b0d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_thompson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 18:36:38 compute-0 magical_thompson[187860]: 167 167
Nov 24 18:36:38 compute-0 systemd[1]: libpod-52468178b66d1e6996fa8dc7172b77161c094fba63633b39b6cdeaa1b707b0d3.scope: Deactivated successfully.
Nov 24 18:36:38 compute-0 podman[187843]: 2025-11-24 18:36:38.157214217 +0000 UTC m=+0.171250684 container died 52468178b66d1e6996fa8dc7172b77161c094fba63633b39b6cdeaa1b707b0d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_thompson, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 18:36:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-889f06631066b0720c656135a2d9a27efa08ff14344f7e5a093904c4c88d4565-merged.mount: Deactivated successfully.
Nov 24 18:36:38 compute-0 podman[187843]: 2025-11-24 18:36:38.222154991 +0000 UTC m=+0.236191388 container remove 52468178b66d1e6996fa8dc7172b77161c094fba63633b39b6cdeaa1b707b0d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_thompson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:36:38 compute-0 systemd[1]: libpod-conmon-52468178b66d1e6996fa8dc7172b77161c094fba63633b39b6cdeaa1b707b0d3.scope: Deactivated successfully.
Nov 24 18:36:38 compute-0 podman[187886]: 2025-11-24 18:36:38.391748394 +0000 UTC m=+0.047833625 container create bfa0df0a3d82f92080c98f18167eaf4dc93691075aa035ec636e2dd63b7bc607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_margulis, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 18:36:38 compute-0 systemd[1]: Started libpod-conmon-bfa0df0a3d82f92080c98f18167eaf4dc93691075aa035ec636e2dd63b7bc607.scope.
Nov 24 18:36:38 compute-0 podman[187886]: 2025-11-24 18:36:38.371487567 +0000 UTC m=+0.027572788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:36:38 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1d6dcb4565225c2cabab6c084171defcb863906c93889aebbd3ab31131007e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1d6dcb4565225c2cabab6c084171defcb863906c93889aebbd3ab31131007e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1d6dcb4565225c2cabab6c084171defcb863906c93889aebbd3ab31131007e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1d6dcb4565225c2cabab6c084171defcb863906c93889aebbd3ab31131007e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:36:38 compute-0 podman[187886]: 2025-11-24 18:36:38.478821722 +0000 UTC m=+0.134906943 container init bfa0df0a3d82f92080c98f18167eaf4dc93691075aa035ec636e2dd63b7bc607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_margulis, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:36:38 compute-0 podman[187886]: 2025-11-24 18:36:38.486358957 +0000 UTC m=+0.142444158 container start bfa0df0a3d82f92080c98f18167eaf4dc93691075aa035ec636e2dd63b7bc607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_margulis, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:36:38 compute-0 podman[187886]: 2025-11-24 18:36:38.488760696 +0000 UTC m=+0.144845897 container attach bfa0df0a3d82f92080c98f18167eaf4dc93691075aa035ec636e2dd63b7bc607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_margulis, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]: {
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:     "0": [
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:         {
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "devices": [
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "/dev/loop3"
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             ],
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "lv_name": "ceph_lv0",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "lv_size": "21470642176",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "name": "ceph_lv0",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "tags": {
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.cluster_name": "ceph",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.crush_device_class": "",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.encrypted": "0",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.osd_id": "0",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.type": "block",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.vdo": "0"
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             },
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "type": "block",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "vg_name": "ceph_vg0"
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:         }
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:     ],
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:     "1": [
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:         {
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "devices": [
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "/dev/loop4"
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             ],
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "lv_name": "ceph_lv1",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "lv_size": "21470642176",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "name": "ceph_lv1",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "tags": {
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.cluster_name": "ceph",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.crush_device_class": "",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.encrypted": "0",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.osd_id": "1",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.type": "block",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.vdo": "0"
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             },
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "type": "block",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "vg_name": "ceph_vg1"
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:         }
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:     ],
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:     "2": [
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:         {
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "devices": [
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "/dev/loop5"
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             ],
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "lv_name": "ceph_lv2",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "lv_size": "21470642176",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "name": "ceph_lv2",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "tags": {
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.cluster_name": "ceph",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.crush_device_class": "",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.encrypted": "0",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.osd_id": "2",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.type": "block",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:                 "ceph.vdo": "0"
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             },
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "type": "block",
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:             "vg_name": "ceph_vg2"
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:         }
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]:     ]
Nov 24 18:36:39 compute-0 xenodochial_margulis[187903]: }
Nov 24 18:36:39 compute-0 ceph-mon[74927]: pgmap v622: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:39 compute-0 systemd[1]: libpod-bfa0df0a3d82f92080c98f18167eaf4dc93691075aa035ec636e2dd63b7bc607.scope: Deactivated successfully.
Nov 24 18:36:39 compute-0 podman[187886]: 2025-11-24 18:36:39.277143798 +0000 UTC m=+0.933228999 container died bfa0df0a3d82f92080c98f18167eaf4dc93691075aa035ec636e2dd63b7bc607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_margulis, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 18:36:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-be1d6dcb4565225c2cabab6c084171defcb863906c93889aebbd3ab31131007e-merged.mount: Deactivated successfully.
Nov 24 18:36:39 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v623: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:39 compute-0 podman[187886]: 2025-11-24 18:36:39.327775471 +0000 UTC m=+0.983860672 container remove bfa0df0a3d82f92080c98f18167eaf4dc93691075aa035ec636e2dd63b7bc607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:36:39 compute-0 systemd[1]: libpod-conmon-bfa0df0a3d82f92080c98f18167eaf4dc93691075aa035ec636e2dd63b7bc607.scope: Deactivated successfully.
Nov 24 18:36:39 compute-0 sudo[187778]: pam_unix(sudo:session): session closed for user root
Nov 24 18:36:39 compute-0 sudo[187924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:36:39 compute-0 sudo[187924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:36:39 compute-0 sudo[187924]: pam_unix(sudo:session): session closed for user root
Nov 24 18:36:39 compute-0 sudo[187949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:36:39 compute-0 sudo[187949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:36:39 compute-0 sudo[187949]: pam_unix(sudo:session): session closed for user root
Nov 24 18:36:39 compute-0 sudo[187974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:36:39 compute-0 sudo[187974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:36:39 compute-0 sudo[187974]: pam_unix(sudo:session): session closed for user root
Nov 24 18:36:39 compute-0 sudo[187999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:36:39 compute-0 sudo[187999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:36:39 compute-0 podman[188062]: 2025-11-24 18:36:39.959840846 +0000 UTC m=+0.035607145 container create d27ec91250054a9e69fcd95cca10b3c1e53226cb25074bdc24db946115046775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:36:39 compute-0 systemd[1]: Started libpod-conmon-d27ec91250054a9e69fcd95cca10b3c1e53226cb25074bdc24db946115046775.scope.
Nov 24 18:36:40 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:36:40 compute-0 podman[188062]: 2025-11-24 18:36:40.041179833 +0000 UTC m=+0.116946182 container init d27ec91250054a9e69fcd95cca10b3c1e53226cb25074bdc24db946115046775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 24 18:36:40 compute-0 podman[188062]: 2025-11-24 18:36:39.94492514 +0000 UTC m=+0.020691459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:36:40 compute-0 podman[188062]: 2025-11-24 18:36:40.048835791 +0000 UTC m=+0.124602100 container start d27ec91250054a9e69fcd95cca10b3c1e53226cb25074bdc24db946115046775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bassi, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 18:36:40 compute-0 podman[188062]: 2025-11-24 18:36:40.05167032 +0000 UTC m=+0.127436639 container attach d27ec91250054a9e69fcd95cca10b3c1e53226cb25074bdc24db946115046775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Nov 24 18:36:40 compute-0 heuristic_bassi[188078]: 167 167
Nov 24 18:36:40 compute-0 systemd[1]: libpod-d27ec91250054a9e69fcd95cca10b3c1e53226cb25074bdc24db946115046775.scope: Deactivated successfully.
Nov 24 18:36:40 compute-0 podman[188062]: 2025-11-24 18:36:40.054339656 +0000 UTC m=+0.130105955 container died d27ec91250054a9e69fcd95cca10b3c1e53226cb25074bdc24db946115046775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bassi, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:36:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e95685bdb242075575ba975df4558fe04926f88146593ba06025bc54b9b65b9-merged.mount: Deactivated successfully.
Nov 24 18:36:40 compute-0 podman[188062]: 2025-11-24 18:36:40.098400677 +0000 UTC m=+0.174167016 container remove d27ec91250054a9e69fcd95cca10b3c1e53226cb25074bdc24db946115046775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bassi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 18:36:40 compute-0 systemd[1]: libpod-conmon-d27ec91250054a9e69fcd95cca10b3c1e53226cb25074bdc24db946115046775.scope: Deactivated successfully.
Nov 24 18:36:40 compute-0 podman[188102]: 2025-11-24 18:36:40.263707805 +0000 UTC m=+0.040424313 container create b639d854128d2ab8f8650cc7a39bd18660bc1a88f18692209e73393ab525a1c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_noether, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:36:40 compute-0 systemd[1]: Started libpod-conmon-b639d854128d2ab8f8650cc7a39bd18660bc1a88f18692209e73393ab525a1c7.scope.
Nov 24 18:36:40 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:36:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da8cbf748fd95d33b92bb8a5234b89e02570293e7d2febd601d0ad474444d493/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:36:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da8cbf748fd95d33b92bb8a5234b89e02570293e7d2febd601d0ad474444d493/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:36:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da8cbf748fd95d33b92bb8a5234b89e02570293e7d2febd601d0ad474444d493/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:36:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da8cbf748fd95d33b92bb8a5234b89e02570293e7d2febd601d0ad474444d493/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:36:40 compute-0 podman[188102]: 2025-11-24 18:36:40.24516083 +0000 UTC m=+0.021877328 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:36:40 compute-0 podman[188102]: 2025-11-24 18:36:40.350109126 +0000 UTC m=+0.126825634 container init b639d854128d2ab8f8650cc7a39bd18660bc1a88f18692209e73393ab525a1c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 18:36:40 compute-0 podman[188102]: 2025-11-24 18:36:40.358528473 +0000 UTC m=+0.135244951 container start b639d854128d2ab8f8650cc7a39bd18660bc1a88f18692209e73393ab525a1c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_noether, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:36:40 compute-0 podman[188102]: 2025-11-24 18:36:40.362225133 +0000 UTC m=+0.138941631 container attach b639d854128d2ab8f8650cc7a39bd18660bc1a88f18692209e73393ab525a1c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_noether, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:36:41 compute-0 vigorous_noether[188119]: {
Nov 24 18:36:41 compute-0 vigorous_noether[188119]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:36:41 compute-0 vigorous_noether[188119]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:36:41 compute-0 vigorous_noether[188119]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:36:41 compute-0 vigorous_noether[188119]:         "osd_id": 0,
Nov 24 18:36:41 compute-0 vigorous_noether[188119]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:36:41 compute-0 vigorous_noether[188119]:         "type": "bluestore"
Nov 24 18:36:41 compute-0 vigorous_noether[188119]:     },
Nov 24 18:36:41 compute-0 vigorous_noether[188119]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:36:41 compute-0 vigorous_noether[188119]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:36:41 compute-0 vigorous_noether[188119]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:36:41 compute-0 vigorous_noether[188119]:         "osd_id": 1,
Nov 24 18:36:41 compute-0 vigorous_noether[188119]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:36:41 compute-0 vigorous_noether[188119]:         "type": "bluestore"
Nov 24 18:36:41 compute-0 vigorous_noether[188119]:     },
Nov 24 18:36:41 compute-0 vigorous_noether[188119]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:36:41 compute-0 vigorous_noether[188119]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:36:41 compute-0 vigorous_noether[188119]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:36:41 compute-0 vigorous_noether[188119]:         "osd_id": 2,
Nov 24 18:36:41 compute-0 vigorous_noether[188119]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:36:41 compute-0 vigorous_noether[188119]:         "type": "bluestore"
Nov 24 18:36:41 compute-0 vigorous_noether[188119]:     }
Nov 24 18:36:41 compute-0 vigorous_noether[188119]: }
Nov 24 18:36:41 compute-0 systemd[1]: libpod-b639d854128d2ab8f8650cc7a39bd18660bc1a88f18692209e73393ab525a1c7.scope: Deactivated successfully.
Nov 24 18:36:41 compute-0 podman[188102]: 2025-11-24 18:36:41.284420649 +0000 UTC m=+1.061137127 container died b639d854128d2ab8f8650cc7a39bd18660bc1a88f18692209e73393ab525a1c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:36:41 compute-0 ceph-mon[74927]: pgmap v623: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-da8cbf748fd95d33b92bb8a5234b89e02570293e7d2febd601d0ad474444d493-merged.mount: Deactivated successfully.
Nov 24 18:36:41 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v624: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:41 compute-0 podman[188102]: 2025-11-24 18:36:41.331947576 +0000 UTC m=+1.108664054 container remove b639d854128d2ab8f8650cc7a39bd18660bc1a88f18692209e73393ab525a1c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 18:36:41 compute-0 systemd[1]: libpod-conmon-b639d854128d2ab8f8650cc7a39bd18660bc1a88f18692209e73393ab525a1c7.scope: Deactivated successfully.
Nov 24 18:36:41 compute-0 sudo[187999]: pam_unix(sudo:session): session closed for user root
Nov 24 18:36:41 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:36:41 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:36:41 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:36:41 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:36:41 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev ffb89280-5c80-45c5-9232-23ab21c10fbf does not exist
Nov 24 18:36:41 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev e9acbac3-f6ec-402a-82a2-032f78e6036d does not exist
Nov 24 18:36:41 compute-0 sudo[188166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:36:41 compute-0 sudo[188166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:36:41 compute-0 sudo[188166]: pam_unix(sudo:session): session closed for user root
Nov 24 18:36:41 compute-0 sudo[188191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:36:41 compute-0 sudo[188191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:36:41 compute-0 sudo[188191]: pam_unix(sudo:session): session closed for user root
Nov 24 18:36:42 compute-0 ceph-mon[74927]: pgmap v624: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:42 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:36:42 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:36:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:36:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:36:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:36:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:36:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:36:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:36:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:36:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:36:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:36:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:36:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:36:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:36:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:36:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:36:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:36:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:36:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:36:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:36:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:36:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:36:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:36:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:36:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:36:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:36:43 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v625: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:44 compute-0 ceph-mon[74927]: pgmap v625: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:45 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v626: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:46 compute-0 ceph-mon[74927]: pgmap v626: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:46 compute-0 kernel: SELinux:  Converting 2769 SID table entries...
Nov 24 18:36:46 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 18:36:46 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 24 18:36:46 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 18:36:46 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 24 18:36:46 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 18:36:46 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 18:36:46 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 18:36:47 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v627: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.624995) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009407625031, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1856, "num_deletes": 250, "total_data_size": 3130629, "memory_usage": 3178776, "flush_reason": "Manual Compaction"}
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009407635232, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1769378, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11763, "largest_seqno": 13618, "table_properties": {"data_size": 1763367, "index_size": 3022, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15012, "raw_average_key_size": 20, "raw_value_size": 1750105, "raw_average_value_size": 2342, "num_data_blocks": 141, "num_entries": 747, "num_filter_entries": 747, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764009197, "oldest_key_time": 1764009197, "file_creation_time": 1764009407, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 10269 microseconds, and 4291 cpu microseconds.
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.635269) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1769378 bytes OK
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.635284) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.636784) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.636798) EVENT_LOG_v1 {"time_micros": 1764009407636793, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.636814) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3122811, prev total WAL file size 3122811, number of live WAL files 2.
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.637646) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323532' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1727KB)], [29(7636KB)]
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009407637685, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9589458, "oldest_snapshot_seqno": -1}
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4048 keys, 7591176 bytes, temperature: kUnknown
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009407670493, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7591176, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7562329, "index_size": 17601, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 96270, "raw_average_key_size": 23, "raw_value_size": 7487559, "raw_average_value_size": 1849, "num_data_blocks": 767, "num_entries": 4048, "num_filter_entries": 4048, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008324, "oldest_key_time": 0, "file_creation_time": 1764009407, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.670705) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7591176 bytes
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.671819) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 291.7 rd, 230.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.5 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(9.7) write-amplify(4.3) OK, records in: 4461, records dropped: 413 output_compression: NoCompression
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.671835) EVENT_LOG_v1 {"time_micros": 1764009407671826, "job": 12, "event": "compaction_finished", "compaction_time_micros": 32877, "compaction_time_cpu_micros": 15480, "output_level": 6, "num_output_files": 1, "total_output_size": 7591176, "num_input_records": 4461, "num_output_records": 4048, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009407672185, "job": 12, "event": "table_file_deletion", "file_number": 31}
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009407673327, "job": 12, "event": "table_file_deletion", "file_number": 29}
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.637576) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.673395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.673400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.673402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.673403) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:36:47 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.673404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:36:47 compute-0 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Nov 24 18:36:48 compute-0 podman[188223]: 2025-11-24 18:36:48.015174476 +0000 UTC m=+0.103469810 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 18:36:48 compute-0 ceph-mon[74927]: pgmap v627: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:49 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:50 compute-0 ceph-mon[74927]: pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:51 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:51 compute-0 podman[188250]: 2025-11-24 18:36:51.971152202 +0000 UTC m=+0.061498861 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 24 18:36:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:36:52 compute-0 ceph-mon[74927]: pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:53 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:54 compute-0 ceph-mon[74927]: pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:55 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:55 compute-0 kernel: SELinux:  Converting 2769 SID table entries...
Nov 24 18:36:55 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 18:36:55 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 24 18:36:55 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 18:36:55 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 24 18:36:55 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 18:36:55 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 18:36:55 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 18:36:56 compute-0 ceph-mon[74927]: pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:57 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:36:58 compute-0 ceph-mon[74927]: pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:36:59 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:00 compute-0 ceph-mon[74927]: pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:01 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:37:03 compute-0 ceph-mon[74927]: pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:03 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:37:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:37:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:37:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:37:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:37:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:37:05 compute-0 ceph-mon[74927]: pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:05 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:07 compute-0 ceph-mon[74927]: pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:07 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:37:09 compute-0 ceph-mon[74927]: pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:09 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:11 compute-0 ceph-mon[74927]: pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:11 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:37:13 compute-0 ceph-mon[74927]: pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:13 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:15 compute-0 ceph-mon[74927]: pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:15 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:17 compute-0 ceph-mon[74927]: pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:17 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:37:18 compute-0 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 24 18:37:19 compute-0 podman[196015]: 2025-11-24 18:37:19.015682392 +0000 UTC m=+0.098182181 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Nov 24 18:37:19 compute-0 ceph-mon[74927]: pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:19 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:21 compute-0 ceph-mon[74927]: pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:21 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:37:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:37:22.728 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:37:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:37:22.728 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:37:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:37:22.728 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:37:22 compute-0 podman[198473]: 2025-11-24 18:37:22.988951143 +0000 UTC m=+0.079061742 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Nov 24 18:37:23 compute-0 ceph-mon[74927]: pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:23 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:25 compute-0 ceph-mon[74927]: pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:25 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:27 compute-0 ceph-mon[74927]: pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:27 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:37:29 compute-0 ceph-mon[74927]: pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:29 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:30 compute-0 ceph-mon[74927]: pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:31 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:32 compute-0 ceph-mon[74927]: pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:37:33 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:34 compute-0 ceph-mon[74927]: pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:37:34
Nov 24 18:37:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:37:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:37:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['default.rgw.log', 'images', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'volumes', 'backups']
Nov 24 18:37:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:37:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:37:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:37:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:37:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:37:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:37:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:37:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:37:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:37:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:37:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:37:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:37:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:37:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:37:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:37:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:37:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:37:35 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:36 compute-0 ceph-mon[74927]: pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:37 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:37:38 compute-0 ceph-mon[74927]: pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:39 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:40 compute-0 ceph-mon[74927]: pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:41 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:41 compute-0 sudo[205119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:37:41 compute-0 sudo[205119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:37:41 compute-0 sudo[205119]: pam_unix(sudo:session): session closed for user root
Nov 24 18:37:41 compute-0 sudo[205144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:37:41 compute-0 sudo[205144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:37:41 compute-0 sudo[205144]: pam_unix(sudo:session): session closed for user root
Nov 24 18:37:41 compute-0 sudo[205169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:37:41 compute-0 sudo[205169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:37:41 compute-0 sudo[205169]: pam_unix(sudo:session): session closed for user root
Nov 24 18:37:41 compute-0 sudo[205194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:37:41 compute-0 sudo[205194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:37:42 compute-0 sudo[205194]: pam_unix(sudo:session): session closed for user root
Nov 24 18:37:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:37:42 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:37:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:37:42 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:37:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:37:42 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:37:42 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev d346a571-67bd-4922-a7fd-8305c0d2659e does not exist
Nov 24 18:37:42 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev f71744e7-58ec-4494-aaf1-3eb0a9283834 does not exist
Nov 24 18:37:42 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 9480c747-57b1-4ee9-9a50-486a48e45f37 does not exist
Nov 24 18:37:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:37:42 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:37:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:37:42 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:37:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:37:42 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:37:42 compute-0 sudo[205249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:37:42 compute-0 sudo[205249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:37:42 compute-0 sudo[205249]: pam_unix(sudo:session): session closed for user root
Nov 24 18:37:42 compute-0 sudo[205274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:37:42 compute-0 sudo[205274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:37:42 compute-0 sudo[205274]: pam_unix(sudo:session): session closed for user root
Nov 24 18:37:42 compute-0 sudo[205299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:37:42 compute-0 sudo[205299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:37:42 compute-0 sudo[205299]: pam_unix(sudo:session): session closed for user root
Nov 24 18:37:42 compute-0 sudo[205324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:37:42 compute-0 sudo[205324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:37:42 compute-0 ceph-mon[74927]: pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:42 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:37:42 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:37:42 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:37:42 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:37:42 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:37:42 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:37:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:37:42 compute-0 podman[205389]: 2025-11-24 18:37:42.704005139 +0000 UTC m=+0.034943970 container create ef999f3f438bc0f2bcf405e18b9ea15f2b508bead1cdede2a018c66b978e44b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hamilton, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 24 18:37:42 compute-0 systemd[1]: Started libpod-conmon-ef999f3f438bc0f2bcf405e18b9ea15f2b508bead1cdede2a018c66b978e44b2.scope.
Nov 24 18:37:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:37:42 compute-0 podman[205389]: 2025-11-24 18:37:42.687990776 +0000 UTC m=+0.018929637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:37:42 compute-0 podman[205389]: 2025-11-24 18:37:42.795402005 +0000 UTC m=+0.126340896 container init ef999f3f438bc0f2bcf405e18b9ea15f2b508bead1cdede2a018c66b978e44b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Nov 24 18:37:42 compute-0 podman[205389]: 2025-11-24 18:37:42.803649508 +0000 UTC m=+0.134588369 container start ef999f3f438bc0f2bcf405e18b9ea15f2b508bead1cdede2a018c66b978e44b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:37:42 compute-0 podman[205389]: 2025-11-24 18:37:42.807956454 +0000 UTC m=+0.138895315 container attach ef999f3f438bc0f2bcf405e18b9ea15f2b508bead1cdede2a018c66b978e44b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 18:37:42 compute-0 bold_hamilton[205405]: 167 167
Nov 24 18:37:42 compute-0 systemd[1]: libpod-ef999f3f438bc0f2bcf405e18b9ea15f2b508bead1cdede2a018c66b978e44b2.scope: Deactivated successfully.
Nov 24 18:37:42 compute-0 podman[205389]: 2025-11-24 18:37:42.809534543 +0000 UTC m=+0.140473374 container died ef999f3f438bc0f2bcf405e18b9ea15f2b508bead1cdede2a018c66b978e44b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 18:37:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-72810a9b26150906649f8686cbf6bcf2b141c26a058d7d6f5f8601f27723c13b-merged.mount: Deactivated successfully.
Nov 24 18:37:42 compute-0 podman[205389]: 2025-11-24 18:37:42.84685117 +0000 UTC m=+0.177790001 container remove ef999f3f438bc0f2bcf405e18b9ea15f2b508bead1cdede2a018c66b978e44b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:37:42 compute-0 systemd[1]: libpod-conmon-ef999f3f438bc0f2bcf405e18b9ea15f2b508bead1cdede2a018c66b978e44b2.scope: Deactivated successfully.
Nov 24 18:37:43 compute-0 podman[205430]: 2025-11-24 18:37:43.006705259 +0000 UTC m=+0.039824400 container create 60bc5643dd4161386ab0f81ab1816d11a2841fd2ab891af0e890b62ee108516e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:37:43 compute-0 systemd[1]: Started libpod-conmon-60bc5643dd4161386ab0f81ab1816d11a2841fd2ab891af0e890b62ee108516e.scope.
Nov 24 18:37:43 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd9b766a7232bb221aa8b72ed1c77a941a6e521913cf3a6784a3bf11479bad9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd9b766a7232bb221aa8b72ed1c77a941a6e521913cf3a6784a3bf11479bad9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd9b766a7232bb221aa8b72ed1c77a941a6e521913cf3a6784a3bf11479bad9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd9b766a7232bb221aa8b72ed1c77a941a6e521913cf3a6784a3bf11479bad9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd9b766a7232bb221aa8b72ed1c77a941a6e521913cf3a6784a3bf11479bad9d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:37:43 compute-0 podman[205430]: 2025-11-24 18:37:43.067748039 +0000 UTC m=+0.100867190 container init 60bc5643dd4161386ab0f81ab1816d11a2841fd2ab891af0e890b62ee108516e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:37:43 compute-0 podman[205430]: 2025-11-24 18:37:43.077593831 +0000 UTC m=+0.110712972 container start 60bc5643dd4161386ab0f81ab1816d11a2841fd2ab891af0e890b62ee108516e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 18:37:43 compute-0 podman[205430]: 2025-11-24 18:37:42.988687186 +0000 UTC m=+0.021806347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:37:43 compute-0 podman[205430]: 2025-11-24 18:37:43.086051339 +0000 UTC m=+0.119170510 container attach 60bc5643dd4161386ab0f81ab1816d11a2841fd2ab891af0e890b62ee108516e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:37:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:37:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:37:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:37:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:37:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:37:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:37:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:37:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:37:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:37:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:37:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:37:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:37:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:37:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:37:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:37:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:37:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:37:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:37:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:37:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:37:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:37:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:37:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:37:43 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:44 compute-0 practical_engelbart[205447]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:37:44 compute-0 practical_engelbart[205447]: --> relative data size: 1.0
Nov 24 18:37:44 compute-0 practical_engelbart[205447]: --> All data devices are unavailable
Nov 24 18:37:44 compute-0 systemd[1]: libpod-60bc5643dd4161386ab0f81ab1816d11a2841fd2ab891af0e890b62ee108516e.scope: Deactivated successfully.
Nov 24 18:37:44 compute-0 podman[205430]: 2025-11-24 18:37:44.061330929 +0000 UTC m=+1.094450130 container died 60bc5643dd4161386ab0f81ab1816d11a2841fd2ab891af0e890b62ee108516e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 18:37:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd9b766a7232bb221aa8b72ed1c77a941a6e521913cf3a6784a3bf11479bad9d-merged.mount: Deactivated successfully.
Nov 24 18:37:44 compute-0 podman[205430]: 2025-11-24 18:37:44.141281254 +0000 UTC m=+1.174400405 container remove 60bc5643dd4161386ab0f81ab1816d11a2841fd2ab891af0e890b62ee108516e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:37:44 compute-0 systemd[1]: libpod-conmon-60bc5643dd4161386ab0f81ab1816d11a2841fd2ab891af0e890b62ee108516e.scope: Deactivated successfully.
Nov 24 18:37:44 compute-0 sudo[205324]: pam_unix(sudo:session): session closed for user root
Nov 24 18:37:44 compute-0 sudo[205490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:37:44 compute-0 sudo[205490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:37:44 compute-0 sudo[205490]: pam_unix(sudo:session): session closed for user root
Nov 24 18:37:44 compute-0 sudo[205515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:37:44 compute-0 sudo[205515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:37:44 compute-0 sudo[205515]: pam_unix(sudo:session): session closed for user root
Nov 24 18:37:44 compute-0 sudo[205540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:37:44 compute-0 sudo[205540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:37:44 compute-0 sudo[205540]: pam_unix(sudo:session): session closed for user root
Nov 24 18:37:44 compute-0 sudo[205565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:37:44 compute-0 sudo[205565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:37:44 compute-0 ceph-mon[74927]: pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:44 compute-0 podman[205634]: 2025-11-24 18:37:44.834760438 +0000 UTC m=+0.025732213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:37:45 compute-0 podman[205634]: 2025-11-24 18:37:45.158371392 +0000 UTC m=+0.349343127 container create b7f0d4beec705ef772805d92816e63461dfe53d13b17354a3816c7a45b8313ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 24 18:37:45 compute-0 systemd[1]: Started libpod-conmon-b7f0d4beec705ef772805d92816e63461dfe53d13b17354a3816c7a45b8313ff.scope.
Nov 24 18:37:45 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:37:45 compute-0 podman[205634]: 2025-11-24 18:37:45.308416379 +0000 UTC m=+0.499388194 container init b7f0d4beec705ef772805d92816e63461dfe53d13b17354a3816c7a45b8313ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jennings, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:37:45 compute-0 podman[205634]: 2025-11-24 18:37:45.319762878 +0000 UTC m=+0.510734633 container start b7f0d4beec705ef772805d92816e63461dfe53d13b17354a3816c7a45b8313ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jennings, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:37:45 compute-0 podman[205634]: 2025-11-24 18:37:45.32597489 +0000 UTC m=+0.516946655 container attach b7f0d4beec705ef772805d92816e63461dfe53d13b17354a3816c7a45b8313ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:37:45 compute-0 infallible_jennings[205651]: 167 167
Nov 24 18:37:45 compute-0 systemd[1]: libpod-b7f0d4beec705ef772805d92816e63461dfe53d13b17354a3816c7a45b8313ff.scope: Deactivated successfully.
Nov 24 18:37:45 compute-0 podman[205634]: 2025-11-24 18:37:45.329165849 +0000 UTC m=+0.520137604 container died b7f0d4beec705ef772805d92816e63461dfe53d13b17354a3816c7a45b8313ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:37:45 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-4dd4e94291aa5e76257be40c7b100171adb8bdf341759975359e7ec4ba9ce6fd-merged.mount: Deactivated successfully.
Nov 24 18:37:45 compute-0 podman[205634]: 2025-11-24 18:37:45.393831598 +0000 UTC m=+0.584803363 container remove b7f0d4beec705ef772805d92816e63461dfe53d13b17354a3816c7a45b8313ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:37:45 compute-0 systemd[1]: libpod-conmon-b7f0d4beec705ef772805d92816e63461dfe53d13b17354a3816c7a45b8313ff.scope: Deactivated successfully.
Nov 24 18:37:45 compute-0 kernel: SELinux:  Converting 2770 SID table entries...
Nov 24 18:37:45 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 18:37:45 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 24 18:37:45 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 18:37:45 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 24 18:37:45 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 18:37:45 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 18:37:45 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 18:37:45 compute-0 podman[205678]: 2025-11-24 18:37:45.63844979 +0000 UTC m=+0.074329978 container create 8f68951e2a8d71b94344147db363721adc298f0771926a4292bc2517d2208ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:37:45 compute-0 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 24 18:37:45 compute-0 podman[205678]: 2025-11-24 18:37:45.603232165 +0000 UTC m=+0.039112413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:37:45 compute-0 systemd[1]: Started libpod-conmon-8f68951e2a8d71b94344147db363721adc298f0771926a4292bc2517d2208ddb.scope.
Nov 24 18:37:45 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f45e0f9b3867c044533c40556a124766346a25b86041dd36ca8be245b8d2d9ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f45e0f9b3867c044533c40556a124766346a25b86041dd36ca8be245b8d2d9ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f45e0f9b3867c044533c40556a124766346a25b86041dd36ca8be245b8d2d9ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f45e0f9b3867c044533c40556a124766346a25b86041dd36ca8be245b8d2d9ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:37:45 compute-0 podman[205678]: 2025-11-24 18:37:45.784390967 +0000 UTC m=+0.220271195 container init 8f68951e2a8d71b94344147db363721adc298f0771926a4292bc2517d2208ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldwasser, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 18:37:45 compute-0 podman[205678]: 2025-11-24 18:37:45.800616306 +0000 UTC m=+0.236496494 container start 8f68951e2a8d71b94344147db363721adc298f0771926a4292bc2517d2208ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldwasser, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 18:37:45 compute-0 podman[205678]: 2025-11-24 18:37:45.807245819 +0000 UTC m=+0.243126067 container attach 8f68951e2a8d71b94344147db363721adc298f0771926a4292bc2517d2208ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldwasser, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]: {
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:     "0": [
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:         {
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "devices": [
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "/dev/loop3"
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             ],
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "lv_name": "ceph_lv0",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "lv_size": "21470642176",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "name": "ceph_lv0",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "tags": {
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.cluster_name": "ceph",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.crush_device_class": "",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.encrypted": "0",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.osd_id": "0",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.type": "block",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.vdo": "0"
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             },
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "type": "block",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "vg_name": "ceph_vg0"
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:         }
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:     ],
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:     "1": [
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:         {
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "devices": [
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "/dev/loop4"
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             ],
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "lv_name": "ceph_lv1",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "lv_size": "21470642176",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "name": "ceph_lv1",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "tags": {
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.cluster_name": "ceph",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.crush_device_class": "",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.encrypted": "0",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.osd_id": "1",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.type": "block",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.vdo": "0"
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             },
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "type": "block",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "vg_name": "ceph_vg1"
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:         }
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:     ],
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:     "2": [
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:         {
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "devices": [
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "/dev/loop5"
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             ],
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "lv_name": "ceph_lv2",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "lv_size": "21470642176",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "name": "ceph_lv2",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "tags": {
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.cluster_name": "ceph",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.crush_device_class": "",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.encrypted": "0",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.osd_id": "2",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.type": "block",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:                 "ceph.vdo": "0"
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             },
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "type": "block",
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:             "vg_name": "ceph_vg2"
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:         }
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]:     ]
Nov 24 18:37:46 compute-0 upbeat_goldwasser[205695]: }
Nov 24 18:37:46 compute-0 ceph-mon[74927]: pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:46 compute-0 systemd[1]: libpod-8f68951e2a8d71b94344147db363721adc298f0771926a4292bc2517d2208ddb.scope: Deactivated successfully.
Nov 24 18:37:46 compute-0 podman[205678]: 2025-11-24 18:37:46.585959298 +0000 UTC m=+1.021839446 container died 8f68951e2a8d71b94344147db363721adc298f0771926a4292bc2517d2208ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldwasser, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 18:37:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-f45e0f9b3867c044533c40556a124766346a25b86041dd36ca8be245b8d2d9ab-merged.mount: Deactivated successfully.
Nov 24 18:37:46 compute-0 podman[205678]: 2025-11-24 18:37:46.65115366 +0000 UTC m=+1.087033808 container remove 8f68951e2a8d71b94344147db363721adc298f0771926a4292bc2517d2208ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldwasser, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 18:37:46 compute-0 systemd[1]: libpod-conmon-8f68951e2a8d71b94344147db363721adc298f0771926a4292bc2517d2208ddb.scope: Deactivated successfully.
Nov 24 18:37:46 compute-0 sudo[205565]: pam_unix(sudo:session): session closed for user root
Nov 24 18:37:46 compute-0 groupadd[205721]: group added to /etc/group: name=dnsmasq, GID=991
Nov 24 18:37:46 compute-0 groupadd[205721]: group added to /etc/gshadow: name=dnsmasq
Nov 24 18:37:46 compute-0 groupadd[205721]: new group: name=dnsmasq, GID=991
Nov 24 18:37:46 compute-0 useradd[205750]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Nov 24 18:37:46 compute-0 sudo[205720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:37:46 compute-0 sudo[205720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:37:46 compute-0 sudo[205720]: pam_unix(sudo:session): session closed for user root
Nov 24 18:37:46 compute-0 dbus-broker-launch[812]: Noticed file-system modification, trigger reload.
Nov 24 18:37:46 compute-0 dbus-broker-launch[812]: Noticed file-system modification, trigger reload.
Nov 24 18:37:46 compute-0 sudo[205758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:37:46 compute-0 sudo[205758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:37:46 compute-0 sudo[205758]: pam_unix(sudo:session): session closed for user root
Nov 24 18:37:46 compute-0 sudo[205787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:37:46 compute-0 sudo[205787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:37:46 compute-0 sudo[205787]: pam_unix(sudo:session): session closed for user root
Nov 24 18:37:46 compute-0 sudo[205812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:37:47 compute-0 sudo[205812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:37:47 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:47 compute-0 podman[205878]: 2025-11-24 18:37:47.377301867 +0000 UTC m=+0.044834603 container create 84e47540d65121285b925a6d17fb5a217fe1a5fb613083dac775fe75ed7ca848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 18:37:47 compute-0 systemd[1]: Started libpod-conmon-84e47540d65121285b925a6d17fb5a217fe1a5fb613083dac775fe75ed7ca848.scope.
Nov 24 18:37:47 compute-0 podman[205878]: 2025-11-24 18:37:47.354735602 +0000 UTC m=+0.022268348 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:37:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:37:47 compute-0 podman[205878]: 2025-11-24 18:37:47.489845143 +0000 UTC m=+0.157377859 container init 84e47540d65121285b925a6d17fb5a217fe1a5fb613083dac775fe75ed7ca848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hamilton, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 18:37:47 compute-0 podman[205878]: 2025-11-24 18:37:47.500593847 +0000 UTC m=+0.168126543 container start 84e47540d65121285b925a6d17fb5a217fe1a5fb613083dac775fe75ed7ca848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hamilton, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:37:47 compute-0 podman[205878]: 2025-11-24 18:37:47.504301678 +0000 UTC m=+0.171834374 container attach 84e47540d65121285b925a6d17fb5a217fe1a5fb613083dac775fe75ed7ca848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hamilton, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:37:47 compute-0 keen_hamilton[205894]: 167 167
Nov 24 18:37:47 compute-0 systemd[1]: libpod-84e47540d65121285b925a6d17fb5a217fe1a5fb613083dac775fe75ed7ca848.scope: Deactivated successfully.
Nov 24 18:37:47 compute-0 podman[205878]: 2025-11-24 18:37:47.509980708 +0000 UTC m=+0.177513404 container died 84e47540d65121285b925a6d17fb5a217fe1a5fb613083dac775fe75ed7ca848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 18:37:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffbfcd4c775ed4d3756cfcda6f4531bd165c8b0e082f89b7ab33a04da140a415-merged.mount: Deactivated successfully.
Nov 24 18:37:47 compute-0 podman[205878]: 2025-11-24 18:37:47.54832311 +0000 UTC m=+0.215855806 container remove 84e47540d65121285b925a6d17fb5a217fe1a5fb613083dac775fe75ed7ca848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hamilton, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:37:47 compute-0 systemd[1]: libpod-conmon-84e47540d65121285b925a6d17fb5a217fe1a5fb613083dac775fe75ed7ca848.scope: Deactivated successfully.
Nov 24 18:37:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:37:47 compute-0 groupadd[205932]: group added to /etc/group: name=clevis, GID=990
Nov 24 18:37:47 compute-0 groupadd[205932]: group added to /etc/gshadow: name=clevis
Nov 24 18:37:47 compute-0 podman[205918]: 2025-11-24 18:37:47.790353579 +0000 UTC m=+0.079934816 container create 744a5be3af8fd9b3485b2b314b5121b98c828d383ace69c61a8a7e722b4ac298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_williams, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:37:47 compute-0 groupadd[205932]: new group: name=clevis, GID=990
Nov 24 18:37:47 compute-0 podman[205918]: 2025-11-24 18:37:47.74931716 +0000 UTC m=+0.038898467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:37:47 compute-0 systemd[1]: Started libpod-conmon-744a5be3af8fd9b3485b2b314b5121b98c828d383ace69c61a8a7e722b4ac298.scope.
Nov 24 18:37:47 compute-0 useradd[205941]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Nov 24 18:37:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:37:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bd705771a980f4689033c54e71d2a3b95940f38b992feb2324a55b9a5aaf35e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:37:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bd705771a980f4689033c54e71d2a3b95940f38b992feb2324a55b9a5aaf35e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:37:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bd705771a980f4689033c54e71d2a3b95940f38b992feb2324a55b9a5aaf35e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:37:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bd705771a980f4689033c54e71d2a3b95940f38b992feb2324a55b9a5aaf35e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:37:47 compute-0 podman[205918]: 2025-11-24 18:37:47.936334757 +0000 UTC m=+0.225916014 container init 744a5be3af8fd9b3485b2b314b5121b98c828d383ace69c61a8a7e722b4ac298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_williams, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:37:47 compute-0 podman[205918]: 2025-11-24 18:37:47.953247792 +0000 UTC m=+0.242829019 container start 744a5be3af8fd9b3485b2b314b5121b98c828d383ace69c61a8a7e722b4ac298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_williams, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:37:47 compute-0 podman[205918]: 2025-11-24 18:37:47.956336998 +0000 UTC m=+0.245918305 container attach 744a5be3af8fd9b3485b2b314b5121b98c828d383ace69c61a8a7e722b4ac298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_williams, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Nov 24 18:37:47 compute-0 usermod[205958]: add 'clevis' to group 'tss'
Nov 24 18:37:47 compute-0 usermod[205958]: add 'clevis' to shadow group 'tss'
Nov 24 18:37:48 compute-0 ceph-mon[74927]: pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:48 compute-0 eager_williams[205945]: {
Nov 24 18:37:48 compute-0 eager_williams[205945]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:37:48 compute-0 eager_williams[205945]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:37:48 compute-0 eager_williams[205945]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:37:48 compute-0 eager_williams[205945]:         "osd_id": 0,
Nov 24 18:37:48 compute-0 eager_williams[205945]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:37:48 compute-0 eager_williams[205945]:         "type": "bluestore"
Nov 24 18:37:48 compute-0 eager_williams[205945]:     },
Nov 24 18:37:48 compute-0 eager_williams[205945]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:37:48 compute-0 eager_williams[205945]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:37:48 compute-0 eager_williams[205945]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:37:48 compute-0 eager_williams[205945]:         "osd_id": 1,
Nov 24 18:37:48 compute-0 eager_williams[205945]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:37:48 compute-0 eager_williams[205945]:         "type": "bluestore"
Nov 24 18:37:48 compute-0 eager_williams[205945]:     },
Nov 24 18:37:48 compute-0 eager_williams[205945]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:37:48 compute-0 eager_williams[205945]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:37:48 compute-0 eager_williams[205945]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:37:48 compute-0 eager_williams[205945]:         "osd_id": 2,
Nov 24 18:37:48 compute-0 eager_williams[205945]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:37:48 compute-0 eager_williams[205945]:         "type": "bluestore"
Nov 24 18:37:48 compute-0 eager_williams[205945]:     }
Nov 24 18:37:48 compute-0 eager_williams[205945]: }
Nov 24 18:37:48 compute-0 systemd[1]: libpod-744a5be3af8fd9b3485b2b314b5121b98c828d383ace69c61a8a7e722b4ac298.scope: Deactivated successfully.
Nov 24 18:37:48 compute-0 podman[205918]: 2025-11-24 18:37:48.952972052 +0000 UTC m=+1.242553279 container died 744a5be3af8fd9b3485b2b314b5121b98c828d383ace69c61a8a7e722b4ac298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_williams, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 24 18:37:48 compute-0 systemd[1]: libpod-744a5be3af8fd9b3485b2b314b5121b98c828d383ace69c61a8a7e722b4ac298.scope: Consumed 1.003s CPU time.
Nov 24 18:37:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bd705771a980f4689033c54e71d2a3b95940f38b992feb2324a55b9a5aaf35e-merged.mount: Deactivated successfully.
Nov 24 18:37:49 compute-0 podman[205918]: 2025-11-24 18:37:49.023319241 +0000 UTC m=+1.312900478 container remove 744a5be3af8fd9b3485b2b314b5121b98c828d383ace69c61a8a7e722b4ac298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 18:37:49 compute-0 systemd[1]: libpod-conmon-744a5be3af8fd9b3485b2b314b5121b98c828d383ace69c61a8a7e722b4ac298.scope: Deactivated successfully.
Nov 24 18:37:49 compute-0 sudo[205812]: pam_unix(sudo:session): session closed for user root
Nov 24 18:37:49 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:37:49 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:37:49 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:37:49 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:37:49 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 21b44f78-fda7-4419-8831-ce4444ca722e does not exist
Nov 24 18:37:49 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 50d1240f-401e-4848-a114-ae4bf5eb5e75 does not exist
Nov 24 18:37:49 compute-0 sudo[206034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:37:49 compute-0 sudo[206034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:37:49 compute-0 sudo[206034]: pam_unix(sudo:session): session closed for user root
Nov 24 18:37:49 compute-0 podman[206023]: 2025-11-24 18:37:49.151640435 +0000 UTC m=+0.092045033 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller)
Nov 24 18:37:49 compute-0 sudo[206073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:37:49 compute-0 sudo[206073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:37:49 compute-0 sudo[206073]: pam_unix(sudo:session): session closed for user root
Nov 24 18:37:49 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:37:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:37:50 compute-0 polkitd[43339]: Reloading rules
Nov 24 18:37:50 compute-0 polkitd[43339]: Collecting garbage unconditionally...
Nov 24 18:37:50 compute-0 polkitd[43339]: Loading rules from directory /etc/polkit-1/rules.d
Nov 24 18:37:50 compute-0 polkitd[43339]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 24 18:37:50 compute-0 polkitd[43339]: Finished loading, compiling and executing 3 rules
Nov 24 18:37:50 compute-0 polkitd[43339]: Reloading rules
Nov 24 18:37:50 compute-0 polkitd[43339]: Collecting garbage unconditionally...
Nov 24 18:37:50 compute-0 polkitd[43339]: Loading rules from directory /etc/polkit-1/rules.d
Nov 24 18:37:50 compute-0 polkitd[43339]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 24 18:37:50 compute-0 polkitd[43339]: Finished loading, compiling and executing 3 rules
Nov 24 18:37:51 compute-0 ceph-mon[74927]: pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:51 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:51 compute-0 groupadd[206265]: group added to /etc/group: name=ceph, GID=167
Nov 24 18:37:51 compute-0 groupadd[206265]: group added to /etc/gshadow: name=ceph
Nov 24 18:37:51 compute-0 groupadd[206265]: new group: name=ceph, GID=167
Nov 24 18:37:51 compute-0 useradd[206271]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Nov 24 18:37:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:37:53 compute-0 ceph-mon[74927]: pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:53 compute-0 podman[206280]: 2025-11-24 18:37:53.302599445 +0000 UTC m=+0.056164902 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:37:53 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:54 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Nov 24 18:37:54 compute-0 sshd[1009]: Received signal 15; terminating.
Nov 24 18:37:54 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Nov 24 18:37:54 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Nov 24 18:37:54 compute-0 systemd[1]: sshd.service: Consumed 2.668s CPU time, read 32.0K from disk, written 12.0K to disk.
Nov 24 18:37:54 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Nov 24 18:37:54 compute-0 systemd[1]: Stopping sshd-keygen.target...
Nov 24 18:37:54 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 18:37:54 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 18:37:54 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 18:37:54 compute-0 systemd[1]: Reached target sshd-keygen.target.
Nov 24 18:37:54 compute-0 systemd[1]: Starting OpenSSH server daemon...
Nov 24 18:37:54 compute-0 sshd[206915]: Server listening on 0.0.0.0 port 22.
Nov 24 18:37:54 compute-0 sshd[206915]: Server listening on :: port 22.
Nov 24 18:37:54 compute-0 systemd[1]: Started OpenSSH server daemon.
Nov 24 18:37:55 compute-0 ceph-mon[74927]: pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:55 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:56 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 18:37:56 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 18:37:56 compute-0 systemd[1]: Reloading.
Nov 24 18:37:56 compute-0 systemd-rc-local-generator[207176]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:37:56 compute-0 systemd-sysv-generator[207181]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:37:56 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 18:37:57 compute-0 ceph-mon[74927]: pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:57 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:37:59 compute-0 ceph-mon[74927]: pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:59 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:37:59 compute-0 sudo[187056]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:00 compute-0 sudo[211325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdvhhvllvyedezosbnfhxjywjlydbbvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009479.7769287-336-44278846699132/AnsiballZ_systemd.py'
Nov 24 18:38:00 compute-0 sudo[211325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:00 compute-0 python3.9[211353]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 18:38:00 compute-0 systemd[1]: Reloading.
Nov 24 18:38:00 compute-0 systemd-sysv-generator[211813]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:38:00 compute-0 systemd-rc-local-generator[211807]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:38:01 compute-0 ceph-mon[74927]: pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:01 compute-0 sudo[211325]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:01 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:01 compute-0 sudo[212664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qylfnsnqhhgpbedbziojlhxhkndbtrkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009481.3256867-336-274082421927200/AnsiballZ_systemd.py'
Nov 24 18:38:01 compute-0 sudo[212664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:01 compute-0 python3.9[212689]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 18:38:02 compute-0 systemd[1]: Reloading.
Nov 24 18:38:02 compute-0 systemd-rc-local-generator[213257]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:38:02 compute-0 systemd-sysv-generator[213262]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:38:02 compute-0 sudo[212664]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:38:02 compute-0 sudo[214199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdoblgnoyrkmyektmwvzgguhspsbokhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009482.4851992-336-156845644355033/AnsiballZ_systemd.py'
Nov 24 18:38:02 compute-0 sudo[214199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:03 compute-0 python3.9[214221]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 18:38:03 compute-0 systemd[1]: Reloading.
Nov 24 18:38:03 compute-0 ceph-mon[74927]: pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:03 compute-0 systemd-rc-local-generator[214717]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:38:03 compute-0 systemd-sysv-generator[214722]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:38:03 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:03 compute-0 sudo[214199]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:03 compute-0 sudo[215589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmjhuyoliehpomnovaiwathcrqtywdze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009483.6001284-336-63197489815081/AnsiballZ_systemd.py'
Nov 24 18:38:03 compute-0 sudo[215589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:04 compute-0 python3.9[215612]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 18:38:04 compute-0 systemd[1]: Reloading.
Nov 24 18:38:04 compute-0 systemd-rc-local-generator[216085]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:38:04 compute-0 systemd-sysv-generator[216090]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:38:04 compute-0 sudo[215589]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:38:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:38:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:38:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:38:04 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 18:38:04 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 18:38:04 compute-0 systemd[1]: man-db-cache-update.service: Consumed 10.344s CPU time.
Nov 24 18:38:04 compute-0 systemd[1]: run-r4471f54cb0244fd18094487f21223860.service: Deactivated successfully.
Nov 24 18:38:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:38:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:38:05 compute-0 sudo[216456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiowekvtpecjfhtkefhalxqtqnrppkug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009484.7605553-365-115371280101975/AnsiballZ_systemd.py'
Nov 24 18:38:05 compute-0 sudo[216456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:05 compute-0 ceph-mon[74927]: pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:05 compute-0 python3.9[216458]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 18:38:05 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:05 compute-0 systemd[1]: Reloading.
Nov 24 18:38:05 compute-0 systemd-rc-local-generator[216483]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:38:05 compute-0 systemd-sysv-generator[216488]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:38:05 compute-0 sudo[216456]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:06 compute-0 sudo[216646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loucjsyhgidlmczdoaecxrkjbmtjnltm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009485.9304771-365-96567098716004/AnsiballZ_systemd.py'
Nov 24 18:38:06 compute-0 sudo[216646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:06 compute-0 python3.9[216648]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 18:38:06 compute-0 systemd[1]: Reloading.
Nov 24 18:38:06 compute-0 systemd-rc-local-generator[216679]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:38:06 compute-0 systemd-sysv-generator[216683]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:38:07 compute-0 sudo[216646]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:07 compute-0 ceph-mon[74927]: pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:07 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:07 compute-0 sudo[216837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iebgovfjsjdigvfgwfwqrhdvjbljtwnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009487.322269-365-211221592898298/AnsiballZ_systemd.py'
Nov 24 18:38:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:38:07 compute-0 sudo[216837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:07 compute-0 python3.9[216839]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 18:38:08 compute-0 systemd[1]: Reloading.
Nov 24 18:38:08 compute-0 systemd-rc-local-generator[216870]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:38:08 compute-0 systemd-sysv-generator[216873]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:38:08 compute-0 sudo[216837]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:08 compute-0 sudo[217027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzeoozjbnavzkgyfivgwwqnotdcnnsdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009488.6144302-365-549999902564/AnsiballZ_systemd.py'
Nov 24 18:38:08 compute-0 sudo[217027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:09 compute-0 ceph-mon[74927]: pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:09 compute-0 python3.9[217029]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 18:38:09 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:09 compute-0 sudo[217027]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:09 compute-0 sudo[217182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbjknmaxjzmhlyojcbibuoeposjqvowp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009489.489483-365-16852972331113/AnsiballZ_systemd.py'
Nov 24 18:38:09 compute-0 sudo[217182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:10 compute-0 python3.9[217184]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 18:38:10 compute-0 systemd[1]: Reloading.
Nov 24 18:38:10 compute-0 systemd-sysv-generator[217215]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:38:10 compute-0 systemd-rc-local-generator[217211]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:38:10 compute-0 sudo[217182]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:11 compute-0 sudo[217372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lumgcyycfwmyztnygmgucfehilvglpmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009490.768787-401-142243444787963/AnsiballZ_systemd.py'
Nov 24 18:38:11 compute-0 sudo[217372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:11 compute-0 ceph-mon[74927]: pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:11 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:11 compute-0 python3.9[217374]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 18:38:11 compute-0 systemd[1]: Reloading.
Nov 24 18:38:11 compute-0 systemd-rc-local-generator[217406]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:38:11 compute-0 systemd-sysv-generator[217410]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:38:11 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 24 18:38:11 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 24 18:38:11 compute-0 sudo[217372]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:12 compute-0 sudo[217565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wehfjnxllrhtnfoecugqatwxoyyspsyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009492.0235796-409-4734639260126/AnsiballZ_systemd.py'
Nov 24 18:38:12 compute-0 sudo[217565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.645940) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009492645962, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1141, "num_deletes": 506, "total_data_size": 1243822, "memory_usage": 1276688, "flush_reason": "Manual Compaction"}
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009492653169, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1232033, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13619, "largest_seqno": 14759, "table_properties": {"data_size": 1226918, "index_size": 2127, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 13285, "raw_average_key_size": 17, "raw_value_size": 1214807, "raw_average_value_size": 1624, "num_data_blocks": 97, "num_entries": 748, "num_filter_entries": 748, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764009408, "oldest_key_time": 1764009408, "file_creation_time": 1764009492, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 7261 microseconds, and 3337 cpu microseconds.
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.653200) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1232033 bytes OK
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.653214) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.654587) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.654635) EVENT_LOG_v1 {"time_micros": 1764009492654624, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.654659) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1237489, prev total WAL file size 1237489, number of live WAL files 2.
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.655354) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1203KB)], [32(7413KB)]
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009492655387, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 8823209, "oldest_snapshot_seqno": -1}
Nov 24 18:38:12 compute-0 python3.9[217567]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3771 keys, 6894971 bytes, temperature: kUnknown
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009492692639, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 6894971, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6868308, "index_size": 16122, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 92610, "raw_average_key_size": 24, "raw_value_size": 6798551, "raw_average_value_size": 1802, "num_data_blocks": 683, "num_entries": 3771, "num_filter_entries": 3771, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008324, "oldest_key_time": 0, "file_creation_time": 1764009492, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.692822) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 6894971 bytes
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.693984) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 236.4 rd, 184.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 7.2 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(12.8) write-amplify(5.6) OK, records in: 4796, records dropped: 1025 output_compression: NoCompression
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.693997) EVENT_LOG_v1 {"time_micros": 1764009492693991, "job": 14, "event": "compaction_finished", "compaction_time_micros": 37329, "compaction_time_cpu_micros": 20181, "output_level": 6, "num_output_files": 1, "total_output_size": 6894971, "num_input_records": 4796, "num_output_records": 3771, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009492694245, "job": 14, "event": "table_file_deletion", "file_number": 34}
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009492695305, "job": 14, "event": "table_file_deletion", "file_number": 32}
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.655283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.695365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.695371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.695375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.695377) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:38:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.695378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:38:12 compute-0 sudo[217565]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:13 compute-0 ceph-mon[74927]: pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:13 compute-0 sudo[217720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kufdvkahwhnabfgxiajobqghbzthsdvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009492.956245-409-210621573842952/AnsiballZ_systemd.py'
Nov 24 18:38:13 compute-0 sudo[217720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:13 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:13 compute-0 python3.9[217722]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 18:38:13 compute-0 sudo[217720]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:14 compute-0 sudo[217875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xniouoaoxrytptmtjklbewzcxfwikjiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009493.821888-409-206831945612903/AnsiballZ_systemd.py'
Nov 24 18:38:14 compute-0 sudo[217875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:14 compute-0 python3.9[217877]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 18:38:14 compute-0 sudo[217875]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:15 compute-0 sudo[218030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqhwacyyussugofaezfgexwmjezdtmyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009494.7438455-409-243153155482271/AnsiballZ_systemd.py'
Nov 24 18:38:15 compute-0 sudo[218030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:15 compute-0 ceph-mon[74927]: pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:15 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:15 compute-0 python3.9[218032]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 18:38:15 compute-0 sudo[218030]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:16 compute-0 sudo[218185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnittnzhuovtxwcvqonsdsaonytmbkzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009495.7315214-409-140775109896922/AnsiballZ_systemd.py'
Nov 24 18:38:16 compute-0 sudo[218185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:16 compute-0 python3.9[218187]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 18:38:16 compute-0 sudo[218185]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:17 compute-0 sudo[218340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzpzjmdbantehovclaxxumltaemcirfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009496.6924393-409-280005058699337/AnsiballZ_systemd.py'
Nov 24 18:38:17 compute-0 sudo[218340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:17 compute-0 ceph-mon[74927]: pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:17 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:17 compute-0 python3.9[218342]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 18:38:17 compute-0 sudo[218340]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:38:17 compute-0 sudo[218495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azhbuijrrvzcmxxdwfyrkracnneswxrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009497.5665674-409-189586073966306/AnsiballZ_systemd.py'
Nov 24 18:38:17 compute-0 sudo[218495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:18 compute-0 python3.9[218497]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 18:38:18 compute-0 sudo[218495]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:18 compute-0 sudo[218650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdufiebhezdjnrauzplnkzroewbvgjin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009498.3290954-409-137802758828899/AnsiballZ_systemd.py'
Nov 24 18:38:18 compute-0 sudo[218650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:18 compute-0 python3.9[218652]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 18:38:19 compute-0 ceph-mon[74927]: pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:19 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:19 compute-0 sudo[218650]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:19 compute-0 podman[218656]: 2025-11-24 18:38:19.993471596 +0000 UTC m=+0.091179632 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Nov 24 18:38:20 compute-0 sudo[218831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtjijzfuagpuhyarzntnqedkkixfnfan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009500.1224499-409-37941705451405/AnsiballZ_systemd.py'
Nov 24 18:38:20 compute-0 sudo[218831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:20 compute-0 python3.9[218833]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 18:38:20 compute-0 sudo[218831]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:21 compute-0 ceph-mon[74927]: pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:21 compute-0 sudo[218986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyuqafyhrxdnncjkaadqthzivolnfqtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009500.9902227-409-278489089337476/AnsiballZ_systemd.py'
Nov 24 18:38:21 compute-0 sudo[218986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:21 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:21 compute-0 python3.9[218988]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 18:38:21 compute-0 sudo[218986]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:22 compute-0 sudo[219141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgzlmrtizcfyirbdpltvefdyaungtokx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009501.7607017-409-210446894430628/AnsiballZ_systemd.py'
Nov 24 18:38:22 compute-0 sudo[219141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:22 compute-0 python3.9[219143]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 18:38:22 compute-0 sudo[219141]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:38:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:38:22.729 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:38:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:38:22.729 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:38:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:38:22.729 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:38:22 compute-0 sudo[219296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loqujffjqzzcxxnjwpndijnmvmlkxrwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009502.647555-409-92720439347439/AnsiballZ_systemd.py'
Nov 24 18:38:22 compute-0 sudo[219296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:23 compute-0 python3.9[219298]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 18:38:23 compute-0 sudo[219296]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:23 compute-0 ceph-mon[74927]: pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:23 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:23 compute-0 sudo[219461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqguncjqrcgshvinuvipumxhwzhtminx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009503.3716156-409-84116033077203/AnsiballZ_systemd.py'
Nov 24 18:38:23 compute-0 sudo[219461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:23 compute-0 podman[219425]: 2025-11-24 18:38:23.74911549 +0000 UTC m=+0.056001277 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 18:38:24 compute-0 python3.9[219470]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 18:38:24 compute-0 sudo[219461]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:24 compute-0 sudo[219627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmppxxndwnqgjncacvsysnuclqttjfji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009504.2358344-409-101687826036631/AnsiballZ_systemd.py'
Nov 24 18:38:24 compute-0 sudo[219627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:24 compute-0 python3.9[219629]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 18:38:24 compute-0 sudo[219627]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:25 compute-0 ceph-mon[74927]: pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:25 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:25 compute-0 sudo[219782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wevzwlfmznvdajhuhvlechsxgsalfntn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009505.3428142-511-221918807395607/AnsiballZ_file.py'
Nov 24 18:38:25 compute-0 sudo[219782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:25 compute-0 python3.9[219784]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:38:25 compute-0 sudo[219782]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:26 compute-0 sudo[219934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rskzvlhtufgfdxolmrlcgidxgntrjkol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009506.0650687-511-109097939733761/AnsiballZ_file.py'
Nov 24 18:38:26 compute-0 sudo[219934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:26 compute-0 python3.9[219936]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:38:26 compute-0 sudo[219934]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:27 compute-0 sudo[220086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvexsmhjhuvlgokvlwqnjmkkwmurbodu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009506.7255683-511-245757731934499/AnsiballZ_file.py'
Nov 24 18:38:27 compute-0 sudo[220086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:27 compute-0 python3.9[220088]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:38:27 compute-0 sudo[220086]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:27 compute-0 ceph-mon[74927]: pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:27 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:38:27 compute-0 sudo[220238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxgwniyiteriuxrzuqyllohsvgkqvyzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009507.4181314-511-152184652480632/AnsiballZ_file.py'
Nov 24 18:38:27 compute-0 sudo[220238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:27 compute-0 python3.9[220240]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:38:27 compute-0 sudo[220238]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:28 compute-0 sudo[220390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmpnvxporyrkndiurcctikozbpypnehp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009508.074608-511-69279164639287/AnsiballZ_file.py'
Nov 24 18:38:28 compute-0 sudo[220390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:28 compute-0 python3.9[220392]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:38:28 compute-0 sudo[220390]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:29 compute-0 sudo[220542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiqqgsccrcuczepubgumhmkftlozkmvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009508.7020123-511-112734220383104/AnsiballZ_file.py'
Nov 24 18:38:29 compute-0 sudo[220542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:29 compute-0 python3.9[220544]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:38:29 compute-0 sudo[220542]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:29 compute-0 ceph-mon[74927]: pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:29 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:29 compute-0 sudo[220694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzrdbxcnswqdmrghgyqzrlvqfgmknrjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009509.433578-554-124272828743679/AnsiballZ_stat.py'
Nov 24 18:38:29 compute-0 sudo[220694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:30 compute-0 python3.9[220696]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:38:30 compute-0 sudo[220694]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:30 compute-0 sudo[220819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynbrvpgkceenomyqeqgynzjvmadmajtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009509.433578-554-124272828743679/AnsiballZ_copy.py'
Nov 24 18:38:30 compute-0 sudo[220819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:30 compute-0 python3.9[220821]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764009509.433578-554-124272828743679/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:38:30 compute-0 sudo[220819]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:31 compute-0 ceph-mon[74927]: pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:31 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:31 compute-0 sudo[220971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgucimtudirixvpexdtbzfqqcaoxyiun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009511.111376-554-207048086746111/AnsiballZ_stat.py'
Nov 24 18:38:31 compute-0 sudo[220971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:31 compute-0 python3.9[220973]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:38:31 compute-0 sudo[220971]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:32 compute-0 sudo[221096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjfxgqlqdgnkbwtbwcuqxctnwjtjcuuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009511.111376-554-207048086746111/AnsiballZ_copy.py'
Nov 24 18:38:32 compute-0 sudo[221096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:32 compute-0 python3.9[221098]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764009511.111376-554-207048086746111/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:38:32 compute-0 sudo[221096]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:38:32 compute-0 sudo[221248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yacipgiatzujpadezmhinimnkzednjqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009512.3802423-554-187540077231187/AnsiballZ_stat.py'
Nov 24 18:38:32 compute-0 sudo[221248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:32 compute-0 python3.9[221250]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:38:32 compute-0 sudo[221248]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:33 compute-0 sudo[221373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fplhwavngwsgvylwbkiscmbwbgwydeqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009512.3802423-554-187540077231187/AnsiballZ_copy.py'
Nov 24 18:38:33 compute-0 sudo[221373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:33 compute-0 ceph-mon[74927]: pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:33 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:33 compute-0 python3.9[221375]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764009512.3802423-554-187540077231187/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:38:33 compute-0 sudo[221373]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:34 compute-0 sudo[221525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wosdpvmayxiryecozamyzxvbrvpifzzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009513.716708-554-191935304423797/AnsiballZ_stat.py'
Nov 24 18:38:34 compute-0 sudo[221525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:34 compute-0 ceph-mon[74927]: pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:34 compute-0 python3.9[221527]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:38:34 compute-0 sudo[221525]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:38:34
Nov 24 18:38:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:38:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:38:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'default.rgw.meta', 'vms', '.rgw.root', '.mgr']
Nov 24 18:38:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:38:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:38:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:38:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:38:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:38:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:38:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:38:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:38:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:38:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:38:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:38:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:38:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:38:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:38:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:38:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:38:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:38:34 compute-0 sudo[221650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utkmgglkfapuqxhzheyqvpcadcdrwyhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009513.716708-554-191935304423797/AnsiballZ_copy.py'
Nov 24 18:38:34 compute-0 sudo[221650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:35 compute-0 python3.9[221652]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764009513.716708-554-191935304423797/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:38:35 compute-0 sudo[221650]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:35 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:35 compute-0 sudo[221802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpsqukcgwuwrwoqptwxvlnplgsjbgkim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009515.1874418-554-141367386498368/AnsiballZ_stat.py'
Nov 24 18:38:35 compute-0 sudo[221802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:35 compute-0 python3.9[221804]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:38:35 compute-0 sudo[221802]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:36 compute-0 sudo[221927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liqjqnzxslrnmszuywakpwsulpqbnaqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009515.1874418-554-141367386498368/AnsiballZ_copy.py'
Nov 24 18:38:36 compute-0 sudo[221927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:36 compute-0 python3.9[221929]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764009515.1874418-554-141367386498368/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:38:36 compute-0 sudo[221927]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:36 compute-0 ceph-mon[74927]: pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:36 compute-0 sudo[222079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chwkxgjlpynhzxtklfmdnwhmjexofvjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009516.4135208-554-185674984620875/AnsiballZ_stat.py'
Nov 24 18:38:36 compute-0 sudo[222079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:37 compute-0 python3.9[222081]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:38:37 compute-0 sudo[222079]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:37 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:37 compute-0 sudo[222204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfhkcutvbammqvqgygufgyrfwauwwvav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009516.4135208-554-185674984620875/AnsiballZ_copy.py'
Nov 24 18:38:37 compute-0 sudo[222204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:37 compute-0 python3.9[222206]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764009516.4135208-554-185674984620875/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:38:37 compute-0 sudo[222204]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:38:38 compute-0 sudo[222356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trbprlwybdzqofosrzvzlazclykpkrkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009517.7607574-554-45710354844842/AnsiballZ_stat.py'
Nov 24 18:38:38 compute-0 sudo[222356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:38 compute-0 python3.9[222358]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:38:38 compute-0 sudo[222356]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:38 compute-0 ceph-mon[74927]: pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:38 compute-0 sudo[222479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhvhxmzbmaeooemcddrfwvqnfjmnpaqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009517.7607574-554-45710354844842/AnsiballZ_copy.py'
Nov 24 18:38:38 compute-0 sudo[222479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:38 compute-0 python3.9[222481]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764009517.7607574-554-45710354844842/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:38:38 compute-0 sudo[222479]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:39 compute-0 sudo[222631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzywjfgtsdorxumkwdhfhgjsgscwxkmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009518.8937547-554-200368818621773/AnsiballZ_stat.py'
Nov 24 18:38:39 compute-0 sudo[222631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:39 compute-0 python3.9[222633]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:38:39 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:39 compute-0 sudo[222631]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:39 compute-0 sudo[222756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-andeolqjotnsqoqqjdfamtxhogfrvvne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009518.8937547-554-200368818621773/AnsiballZ_copy.py'
Nov 24 18:38:39 compute-0 sudo[222756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:39 compute-0 python3.9[222758]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764009518.8937547-554-200368818621773/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:38:40 compute-0 sudo[222756]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:40 compute-0 ceph-mon[74927]: pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:40 compute-0 sudo[222908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhmyvzhutsuaorykcccgaxxzjrqdsodt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009520.2159636-667-12570263770856/AnsiballZ_command.py'
Nov 24 18:38:40 compute-0 sudo[222908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:40 compute-0 python3.9[222910]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 24 18:38:40 compute-0 sudo[222908]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:41 compute-0 sudo[223061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgshiaribgfwsffayhjnbvjxigneifvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009520.9656498-676-99577535311090/AnsiballZ_file.py'
Nov 24 18:38:41 compute-0 sudo[223061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:41 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:41 compute-0 python3.9[223063]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:38:41 compute-0 sudo[223061]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:41 compute-0 sudo[223213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufndwhxjzcdsojxvescndkhdtfnsqvim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009521.6183746-676-196376471471640/AnsiballZ_file.py'
Nov 24 18:38:41 compute-0 sudo[223213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:42 compute-0 python3.9[223215]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:38:42 compute-0 sudo[223213]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:42 compute-0 ceph-mon[74927]: pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:38:42 compute-0 sudo[223365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leumrrvkgvvxsjvgizopcitgnsbfpfrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009522.3392837-676-274204692160380/AnsiballZ_file.py'
Nov 24 18:38:42 compute-0 sudo[223365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:42 compute-0 python3.9[223367]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:38:42 compute-0 sudo[223365]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:38:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:38:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:38:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:38:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:38:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:38:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:38:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:38:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:38:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:38:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:38:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:38:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:38:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:38:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:38:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:38:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:38:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:38:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:38:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:38:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:38:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:38:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:38:43 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:43 compute-0 sudo[223517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytngwpumeywdxyppecpmrkmwiamydsni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009523.050073-676-278228505374932/AnsiballZ_file.py'
Nov 24 18:38:43 compute-0 sudo[223517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:43 compute-0 python3.9[223519]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:38:43 compute-0 sudo[223517]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:44 compute-0 sudo[223669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulkncaoujgpffchmzsdvoopolwtcbkvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009523.7521806-676-107367911721458/AnsiballZ_file.py'
Nov 24 18:38:44 compute-0 sudo[223669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:44 compute-0 python3.9[223671]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:38:44 compute-0 sudo[223669]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:44 compute-0 ceph-mon[74927]: pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:44 compute-0 sudo[223821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcypbummqmqgxmcficvybpkgxiooxnya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009524.4538317-676-35080753208424/AnsiballZ_file.py'
Nov 24 18:38:44 compute-0 sudo[223821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:45 compute-0 python3.9[223823]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:38:45 compute-0 sudo[223821]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:45 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:45 compute-0 sudo[223973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsgdsjziynzwsrtmguakbiliwyxplbcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009525.2416499-676-115470281715324/AnsiballZ_file.py'
Nov 24 18:38:45 compute-0 sudo[223973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:45 compute-0 python3.9[223975]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:38:45 compute-0 sudo[223973]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:46 compute-0 sudo[224125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yemyezingopgixptyikbgrozhbozfily ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009525.9660592-676-47431282524419/AnsiballZ_file.py'
Nov 24 18:38:46 compute-0 sudo[224125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:46 compute-0 ceph-mon[74927]: pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:46 compute-0 python3.9[224127]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:38:46 compute-0 sudo[224125]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:47 compute-0 sudo[224277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxglrtbmbubzauxjtopfwgkcawzcbsai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009526.6790106-676-201618501744979/AnsiballZ_file.py'
Nov 24 18:38:47 compute-0 sudo[224277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:47 compute-0 python3.9[224279]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:38:47 compute-0 sudo[224277]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:47 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:47 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:38:47 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3319 writes, 14K keys, 3319 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3319 writes, 3319 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1288 writes, 5829 keys, 1288 commit groups, 1.0 writes per commit group, ingest: 8.48 MB, 0.01 MB/s
                                           Interval WAL: 1288 writes, 1288 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     73.8      0.21              0.04         7    0.029       0      0       0.0       0.0
                                             L6      1/0    6.58 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7    211.8    174.7      0.23              0.11         6    0.039     24K   3197       0.0       0.0
                                            Sum      1/0    6.58 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.7    112.2    127.2      0.44              0.15        13    0.034     24K   3197       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.9    178.8    179.3      0.19              0.09         8    0.024     17K   2466       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    211.8    174.7      0.23              0.11         6    0.039     24K   3197       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     74.2      0.20              0.04         6    0.034       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     28.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.015, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.05 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.4 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562af0cfd1f0#2 capacity: 308.00 MB usage: 1.64 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(106,1.43 MB,0.463709%) FilterBlock(14,75.42 KB,0.0239137%) IndexBlock(14,144.53 KB,0.0458259%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 24 18:38:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:38:47 compute-0 sudo[224429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggtvmyjasfzrpzpvyokpwvxpojwksxrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009527.4638777-676-223267767415425/AnsiballZ_file.py'
Nov 24 18:38:47 compute-0 sudo[224429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:47 compute-0 python3.9[224431]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:38:47 compute-0 sudo[224429]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:48 compute-0 sudo[224581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwygjjozcbjyoisrijezxacfhzzsbzdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009528.1348155-676-48460539915175/AnsiballZ_file.py'
Nov 24 18:38:48 compute-0 sudo[224581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:48 compute-0 ceph-mon[74927]: pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:48 compute-0 python3.9[224583]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:38:48 compute-0 sudo[224581]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:48 compute-0 sudo[224733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkhcgvttfahjazebpmepmidtlkcgxway ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009528.7324264-676-226704317443230/AnsiballZ_file.py'
Nov 24 18:38:48 compute-0 sudo[224733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:49 compute-0 python3.9[224735]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:38:49 compute-0 sudo[224733]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:49 compute-0 sudo[224736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:38:49 compute-0 sudo[224736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:38:49 compute-0 sudo[224736]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:49 compute-0 sudo[224784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:38:49 compute-0 sudo[224784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:38:49 compute-0 sudo[224784]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:49 compute-0 sudo[224818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:38:49 compute-0 sudo[224818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:38:49 compute-0 sudo[224818]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:49 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:49 compute-0 sudo[224867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:38:49 compute-0 sudo[224867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:38:49 compute-0 sudo[224995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzshsgyszkucguyduakcmhlrgaguhoiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009529.31431-676-179936619809498/AnsiballZ_file.py'
Nov 24 18:38:49 compute-0 sudo[224995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:49 compute-0 python3.9[224999]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:38:49 compute-0 sudo[224867]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:49 compute-0 sudo[224995]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:49 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:38:49 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:38:49 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:38:49 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:38:49 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:38:49 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:38:49 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 779778df-0b31-40b2-bfc3-07d9dd01cab9 does not exist
Nov 24 18:38:49 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 9ec13bae-6fe7-4455-9c3c-366f65b5fd7a does not exist
Nov 24 18:38:49 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 6e7d6fff-388c-4c10-a450-3908d4aba869 does not exist
Nov 24 18:38:49 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:38:49 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:38:49 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:38:49 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:38:49 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:38:49 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:38:49 compute-0 sudo[225023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:38:49 compute-0 sudo[225023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:38:49 compute-0 sudo[225023]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:49 compute-0 sudo[225066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:38:49 compute-0 sudo[225066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:38:49 compute-0 sudo[225066]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:50 compute-0 sudo[225118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:38:50 compute-0 sudo[225118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:38:50 compute-0 sudo[225118]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:50 compute-0 sudo[225177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:38:50 compute-0 sudo[225177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:38:50 compute-0 podman[225167]: 2025-11-24 18:38:50.119473722 +0000 UTC m=+0.074406510 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 18:38:50 compute-0 sudo[225295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybmppsqsfpbrsoychswanxslzptaglml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009529.9614108-676-124502092242006/AnsiballZ_file.py'
Nov 24 18:38:50 compute-0 sudo[225295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:50 compute-0 podman[225332]: 2025-11-24 18:38:50.385150102 +0000 UTC m=+0.037343319 container create 2008f009803bf10ad344f029009360983c82c626f796eb68710207c468dc46e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elbakyan, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 18:38:50 compute-0 python3.9[225308]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:38:50 compute-0 systemd[1]: Started libpod-conmon-2008f009803bf10ad344f029009360983c82c626f796eb68710207c468dc46e6.scope.
Nov 24 18:38:50 compute-0 sudo[225295]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:50 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:38:50 compute-0 podman[225332]: 2025-11-24 18:38:50.367254092 +0000 UTC m=+0.019447309 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:38:50 compute-0 podman[225332]: 2025-11-24 18:38:50.467755972 +0000 UTC m=+0.119949209 container init 2008f009803bf10ad344f029009360983c82c626f796eb68710207c468dc46e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elbakyan, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:38:50 compute-0 ceph-mon[74927]: pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:38:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:38:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:38:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:38:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:38:50 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:38:50 compute-0 podman[225332]: 2025-11-24 18:38:50.474706233 +0000 UTC m=+0.126899460 container start 2008f009803bf10ad344f029009360983c82c626f796eb68710207c468dc46e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 18:38:50 compute-0 podman[225332]: 2025-11-24 18:38:50.47743322 +0000 UTC m=+0.129626437 container attach 2008f009803bf10ad344f029009360983c82c626f796eb68710207c468dc46e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:38:50 compute-0 distracted_elbakyan[225348]: 167 167
Nov 24 18:38:50 compute-0 systemd[1]: libpod-2008f009803bf10ad344f029009360983c82c626f796eb68710207c468dc46e6.scope: Deactivated successfully.
Nov 24 18:38:50 compute-0 podman[225332]: 2025-11-24 18:38:50.479995463 +0000 UTC m=+0.132188680 container died 2008f009803bf10ad344f029009360983c82c626f796eb68710207c468dc46e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elbakyan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 18:38:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-5326b85218e25a95f1f66c7cbd3745005e8d9dda781d1b48181bbcf02684e010-merged.mount: Deactivated successfully.
Nov 24 18:38:50 compute-0 podman[225332]: 2025-11-24 18:38:50.516785217 +0000 UTC m=+0.168978464 container remove 2008f009803bf10ad344f029009360983c82c626f796eb68710207c468dc46e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elbakyan, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 18:38:50 compute-0 systemd[1]: libpod-conmon-2008f009803bf10ad344f029009360983c82c626f796eb68710207c468dc46e6.scope: Deactivated successfully.
Nov 24 18:38:50 compute-0 podman[225409]: 2025-11-24 18:38:50.668438814 +0000 UTC m=+0.035575055 container create 2b595c31ab6a013a7c7ae6fe2ba30ad616dbeabe6cca3d203e79d4be9eb55ac1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:38:50 compute-0 systemd[1]: Started libpod-conmon-2b595c31ab6a013a7c7ae6fe2ba30ad616dbeabe6cca3d203e79d4be9eb55ac1.scope.
Nov 24 18:38:50 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5503b19bc255470fa142be1169d58793f1658ddb636d631870973319afa75c60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5503b19bc255470fa142be1169d58793f1658ddb636d631870973319afa75c60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5503b19bc255470fa142be1169d58793f1658ddb636d631870973319afa75c60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5503b19bc255470fa142be1169d58793f1658ddb636d631870973319afa75c60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5503b19bc255470fa142be1169d58793f1658ddb636d631870973319afa75c60/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:38:50 compute-0 podman[225409]: 2025-11-24 18:38:50.653313553 +0000 UTC m=+0.020449824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:38:50 compute-0 podman[225409]: 2025-11-24 18:38:50.762911116 +0000 UTC m=+0.130047357 container init 2b595c31ab6a013a7c7ae6fe2ba30ad616dbeabe6cca3d203e79d4be9eb55ac1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_noether, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:38:50 compute-0 podman[225409]: 2025-11-24 18:38:50.769376335 +0000 UTC m=+0.136512576 container start 2b595c31ab6a013a7c7ae6fe2ba30ad616dbeabe6cca3d203e79d4be9eb55ac1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_noether, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 18:38:50 compute-0 podman[225409]: 2025-11-24 18:38:50.772259046 +0000 UTC m=+0.139395287 container attach 2b595c31ab6a013a7c7ae6fe2ba30ad616dbeabe6cca3d203e79d4be9eb55ac1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_noether, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:38:50 compute-0 sudo[225543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kybwrtixxtnfsnqdwucmthurlsxwynve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009530.6421704-775-131328091318326/AnsiballZ_stat.py'
Nov 24 18:38:50 compute-0 sudo[225543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:51 compute-0 python3.9[225545]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:38:51 compute-0 sudo[225543]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:51 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:51 compute-0 sudo[225672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-runpgenihprmtsrrhjvngiaxfyzilsis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009530.6421704-775-131328091318326/AnsiballZ_copy.py'
Nov 24 18:38:51 compute-0 sudo[225672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:51 compute-0 python3.9[225676]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009530.6421704-775-131328091318326/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:38:51 compute-0 sudo[225672]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:51 compute-0 zealous_noether[225459]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:38:51 compute-0 zealous_noether[225459]: --> relative data size: 1.0
Nov 24 18:38:51 compute-0 zealous_noether[225459]: --> All data devices are unavailable
Nov 24 18:38:51 compute-0 systemd[1]: libpod-2b595c31ab6a013a7c7ae6fe2ba30ad616dbeabe6cca3d203e79d4be9eb55ac1.scope: Deactivated successfully.
Nov 24 18:38:51 compute-0 podman[225409]: 2025-11-24 18:38:51.830120096 +0000 UTC m=+1.197256347 container died 2b595c31ab6a013a7c7ae6fe2ba30ad616dbeabe6cca3d203e79d4be9eb55ac1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:38:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-5503b19bc255470fa142be1169d58793f1658ddb636d631870973319afa75c60-merged.mount: Deactivated successfully.
Nov 24 18:38:51 compute-0 podman[225409]: 2025-11-24 18:38:51.881499518 +0000 UTC m=+1.248635749 container remove 2b595c31ab6a013a7c7ae6fe2ba30ad616dbeabe6cca3d203e79d4be9eb55ac1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_noether, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:38:51 compute-0 systemd[1]: libpod-conmon-2b595c31ab6a013a7c7ae6fe2ba30ad616dbeabe6cca3d203e79d4be9eb55ac1.scope: Deactivated successfully.
Nov 24 18:38:51 compute-0 sudo[225177]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:51 compute-0 sudo[225751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:38:51 compute-0 sudo[225751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:38:51 compute-0 sudo[225751]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:52 compute-0 sudo[225805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:38:52 compute-0 sudo[225805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:38:52 compute-0 sudo[225805]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:52 compute-0 sudo[225853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:38:52 compute-0 sudo[225853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:38:52 compute-0 sudo[225853]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:52 compute-0 sudo[225890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:38:52 compute-0 sudo[225890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:38:52 compute-0 sudo[225953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtiuxogbngqmwzntrnguskfhnncsyppi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009531.9187055-775-100223039296840/AnsiballZ_stat.py'
Nov 24 18:38:52 compute-0 sudo[225953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:52 compute-0 python3.9[225955]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:38:52 compute-0 sudo[225953]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:52 compute-0 podman[225996]: 2025-11-24 18:38:52.441087232 +0000 UTC m=+0.036829076 container create 06e5af4a84a0af670db3b254a47047b53685569d0ebeb51c07f79e4a0ce3d7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lewin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:38:52 compute-0 systemd[1]: Started libpod-conmon-06e5af4a84a0af670db3b254a47047b53685569d0ebeb51c07f79e4a0ce3d7af.scope.
Nov 24 18:38:52 compute-0 ceph-mon[74927]: pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:52 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:38:52 compute-0 podman[225996]: 2025-11-24 18:38:52.516514536 +0000 UTC m=+0.112256430 container init 06e5af4a84a0af670db3b254a47047b53685569d0ebeb51c07f79e4a0ce3d7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lewin, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 18:38:52 compute-0 podman[225996]: 2025-11-24 18:38:52.423834968 +0000 UTC m=+0.019576832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:38:52 compute-0 podman[225996]: 2025-11-24 18:38:52.524174004 +0000 UTC m=+0.119915848 container start 06e5af4a84a0af670db3b254a47047b53685569d0ebeb51c07f79e4a0ce3d7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 18:38:52 compute-0 podman[225996]: 2025-11-24 18:38:52.527493505 +0000 UTC m=+0.123235349 container attach 06e5af4a84a0af670db3b254a47047b53685569d0ebeb51c07f79e4a0ce3d7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lewin, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 18:38:52 compute-0 nice_lewin[226036]: 167 167
Nov 24 18:38:52 compute-0 systemd[1]: libpod-06e5af4a84a0af670db3b254a47047b53685569d0ebeb51c07f79e4a0ce3d7af.scope: Deactivated successfully.
Nov 24 18:38:52 compute-0 podman[225996]: 2025-11-24 18:38:52.529490895 +0000 UTC m=+0.125232739 container died 06e5af4a84a0af670db3b254a47047b53685569d0ebeb51c07f79e4a0ce3d7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lewin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 18:38:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-38f4347499c94109b158539fd4e1252d23003c5acd97bee8381dbe478a6232d0-merged.mount: Deactivated successfully.
Nov 24 18:38:52 compute-0 podman[225996]: 2025-11-24 18:38:52.565698724 +0000 UTC m=+0.161440568 container remove 06e5af4a84a0af670db3b254a47047b53685569d0ebeb51c07f79e4a0ce3d7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 24 18:38:52 compute-0 systemd[1]: libpod-conmon-06e5af4a84a0af670db3b254a47047b53685569d0ebeb51c07f79e4a0ce3d7af.scope: Deactivated successfully.
Nov 24 18:38:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:38:52 compute-0 podman[226125]: 2025-11-24 18:38:52.711910168 +0000 UTC m=+0.035171385 container create 6f4a855f849bb35ea4a5d419ecdf141da02278f2a077b227547eae068138b3c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 24 18:38:52 compute-0 systemd[1]: Started libpod-conmon-6f4a855f849bb35ea4a5d419ecdf141da02278f2a077b227547eae068138b3c8.scope.
Nov 24 18:38:52 compute-0 sudo[226171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhmlbjowcrbqwnwtteljrxyozmnmugfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009531.9187055-775-100223039296840/AnsiballZ_copy.py'
Nov 24 18:38:52 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:38:52 compute-0 sudo[226171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f1a8a91f95d160d4d5879a5197f16e344539ac8b4035c1aa583952789f23e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:38:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f1a8a91f95d160d4d5879a5197f16e344539ac8b4035c1aa583952789f23e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:38:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f1a8a91f95d160d4d5879a5197f16e344539ac8b4035c1aa583952789f23e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:38:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f1a8a91f95d160d4d5879a5197f16e344539ac8b4035c1aa583952789f23e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:38:52 compute-0 podman[226125]: 2025-11-24 18:38:52.782750739 +0000 UTC m=+0.106011966 container init 6f4a855f849bb35ea4a5d419ecdf141da02278f2a077b227547eae068138b3c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_dhawan, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:38:52 compute-0 podman[226125]: 2025-11-24 18:38:52.696162631 +0000 UTC m=+0.019423848 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:38:52 compute-0 podman[226125]: 2025-11-24 18:38:52.793440222 +0000 UTC m=+0.116701449 container start 6f4a855f849bb35ea4a5d419ecdf141da02278f2a077b227547eae068138b3c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_dhawan, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 18:38:52 compute-0 podman[226125]: 2025-11-24 18:38:52.796561268 +0000 UTC m=+0.119822595 container attach 6f4a855f849bb35ea4a5d419ecdf141da02278f2a077b227547eae068138b3c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_dhawan, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:38:52 compute-0 python3.9[226176]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009531.9187055-775-100223039296840/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:38:52 compute-0 sudo[226171]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:53 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:53 compute-0 sudo[226328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgmxtawoqwbplyydkdadqjpzslmftjol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009533.1193957-775-46771593594714/AnsiballZ_stat.py'
Nov 24 18:38:53 compute-0 sudo[226328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]: {
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:     "0": [
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:         {
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "devices": [
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "/dev/loop3"
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             ],
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "lv_name": "ceph_lv0",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "lv_size": "21470642176",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "name": "ceph_lv0",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "tags": {
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.cluster_name": "ceph",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.crush_device_class": "",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.encrypted": "0",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.osd_id": "0",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.type": "block",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.vdo": "0"
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             },
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "type": "block",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "vg_name": "ceph_vg0"
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:         }
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:     ],
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:     "1": [
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:         {
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "devices": [
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "/dev/loop4"
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             ],
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "lv_name": "ceph_lv1",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "lv_size": "21470642176",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "name": "ceph_lv1",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "tags": {
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.cluster_name": "ceph",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.crush_device_class": "",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.encrypted": "0",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.osd_id": "1",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.type": "block",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.vdo": "0"
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             },
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "type": "block",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "vg_name": "ceph_vg1"
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:         }
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:     ],
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:     "2": [
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:         {
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "devices": [
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "/dev/loop5"
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             ],
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "lv_name": "ceph_lv2",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "lv_size": "21470642176",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "name": "ceph_lv2",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "tags": {
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.cluster_name": "ceph",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.crush_device_class": "",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.encrypted": "0",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.osd_id": "2",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.type": "block",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:                 "ceph.vdo": "0"
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             },
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "type": "block",
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:             "vg_name": "ceph_vg2"
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:         }
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]:     ]
Nov 24 18:38:53 compute-0 thirsty_dhawan[226172]: }
Nov 24 18:38:53 compute-0 systemd[1]: libpod-6f4a855f849bb35ea4a5d419ecdf141da02278f2a077b227547eae068138b3c8.scope: Deactivated successfully.
Nov 24 18:38:53 compute-0 podman[226125]: 2025-11-24 18:38:53.545518665 +0000 UTC m=+0.868779892 container died 6f4a855f849bb35ea4a5d419ecdf141da02278f2a077b227547eae068138b3c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_dhawan, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:38:53 compute-0 python3.9[226330]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:38:53 compute-0 sudo[226328]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-97f1a8a91f95d160d4d5879a5197f16e344539ac8b4035c1aa583952789f23e9-merged.mount: Deactivated successfully.
Nov 24 18:38:53 compute-0 podman[226125]: 2025-11-24 18:38:53.61122404 +0000 UTC m=+0.934485257 container remove 6f4a855f849bb35ea4a5d419ecdf141da02278f2a077b227547eae068138b3c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_dhawan, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:38:53 compute-0 systemd[1]: libpod-conmon-6f4a855f849bb35ea4a5d419ecdf141da02278f2a077b227547eae068138b3c8.scope: Deactivated successfully.
Nov 24 18:38:53 compute-0 sudo[225890]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:53 compute-0 sudo[226369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:38:53 compute-0 sudo[226369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:38:53 compute-0 sudo[226369]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:53 compute-0 sudo[226417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:38:53 compute-0 sudo[226417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:38:53 compute-0 sudo[226417]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:53 compute-0 sudo[226450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:38:53 compute-0 sudo[226450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:38:53 compute-0 sudo[226450]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:53 compute-0 sudo[226493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:38:53 compute-0 sudo[226493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:38:53 compute-0 podman[226490]: 2025-11-24 18:38:53.852932561 +0000 UTC m=+0.053506697 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true)
Nov 24 18:38:53 compute-0 sudo[226586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obzsnmzspnyuinngvlkgfryqgxhpmanq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009533.1193957-775-46771593594714/AnsiballZ_copy.py'
Nov 24 18:38:53 compute-0 sudo[226586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:54 compute-0 podman[226630]: 2025-11-24 18:38:54.112187462 +0000 UTC m=+0.033259768 container create 915a54bd6435a92343f633515d1b748e4edb04bfee0cc0f4a5ef93c44f18062c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_colden, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:38:54 compute-0 python3.9[226588]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009533.1193957-775-46771593594714/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:38:54 compute-0 systemd[1]: Started libpod-conmon-915a54bd6435a92343f633515d1b748e4edb04bfee0cc0f4a5ef93c44f18062c.scope.
Nov 24 18:38:54 compute-0 sudo[226586]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:54 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:38:54 compute-0 podman[226630]: 2025-11-24 18:38:54.165429441 +0000 UTC m=+0.086501767 container init 915a54bd6435a92343f633515d1b748e4edb04bfee0cc0f4a5ef93c44f18062c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_colden, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 18:38:54 compute-0 podman[226630]: 2025-11-24 18:38:54.172606667 +0000 UTC m=+0.093678973 container start 915a54bd6435a92343f633515d1b748e4edb04bfee0cc0f4a5ef93c44f18062c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:38:54 compute-0 podman[226630]: 2025-11-24 18:38:54.175531289 +0000 UTC m=+0.096603595 container attach 915a54bd6435a92343f633515d1b748e4edb04bfee0cc0f4a5ef93c44f18062c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_colden, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 18:38:54 compute-0 mystifying_colden[226646]: 167 167
Nov 24 18:38:54 compute-0 systemd[1]: libpod-915a54bd6435a92343f633515d1b748e4edb04bfee0cc0f4a5ef93c44f18062c.scope: Deactivated successfully.
Nov 24 18:38:54 compute-0 podman[226630]: 2025-11-24 18:38:54.177430306 +0000 UTC m=+0.098502612 container died 915a54bd6435a92343f633515d1b748e4edb04bfee0cc0f4a5ef93c44f18062c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_colden, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:38:54 compute-0 podman[226630]: 2025-11-24 18:38:54.097784968 +0000 UTC m=+0.018857294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:38:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbff84cb96da295f2067e22c3abad70b53a897777e06c905b56c544edb1f7ff7-merged.mount: Deactivated successfully.
Nov 24 18:38:54 compute-0 podman[226630]: 2025-11-24 18:38:54.211520454 +0000 UTC m=+0.132592760 container remove 915a54bd6435a92343f633515d1b748e4edb04bfee0cc0f4a5ef93c44f18062c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_colden, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 18:38:54 compute-0 systemd[1]: libpod-conmon-915a54bd6435a92343f633515d1b748e4edb04bfee0cc0f4a5ef93c44f18062c.scope: Deactivated successfully.
Nov 24 18:38:54 compute-0 podman[226720]: 2025-11-24 18:38:54.374320545 +0000 UTC m=+0.044618068 container create fc26ddf311eed8216313d5c4c816546ebfb748364f2ee0d9fa3d1cdee4a9175e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:38:54 compute-0 systemd[1]: Started libpod-conmon-fc26ddf311eed8216313d5c4c816546ebfb748364f2ee0d9fa3d1cdee4a9175e.scope.
Nov 24 18:38:54 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06daddb67b93084d4a471ee6fff13745efb3237bbdc485cfab7b698a2557d416/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06daddb67b93084d4a471ee6fff13745efb3237bbdc485cfab7b698a2557d416/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06daddb67b93084d4a471ee6fff13745efb3237bbdc485cfab7b698a2557d416/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06daddb67b93084d4a471ee6fff13745efb3237bbdc485cfab7b698a2557d416/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:38:54 compute-0 podman[226720]: 2025-11-24 18:38:54.356408835 +0000 UTC m=+0.026706408 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:38:54 compute-0 podman[226720]: 2025-11-24 18:38:54.452580378 +0000 UTC m=+0.122877921 container init fc26ddf311eed8216313d5c4c816546ebfb748364f2ee0d9fa3d1cdee4a9175e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 18:38:54 compute-0 podman[226720]: 2025-11-24 18:38:54.461005485 +0000 UTC m=+0.131303008 container start fc26ddf311eed8216313d5c4c816546ebfb748364f2ee0d9fa3d1cdee4a9175e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lalande, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 18:38:54 compute-0 podman[226720]: 2025-11-24 18:38:54.464779828 +0000 UTC m=+0.135077361 container attach fc26ddf311eed8216313d5c4c816546ebfb748364f2ee0d9fa3d1cdee4a9175e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lalande, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 18:38:54 compute-0 ceph-mon[74927]: pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:54 compute-0 sudo[226841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxnpfqrisaxloxuvbqmpllmkjzizdrau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009534.2862272-775-9405211155211/AnsiballZ_stat.py'
Nov 24 18:38:54 compute-0 sudo[226841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:54 compute-0 python3.9[226843]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:38:54 compute-0 sudo[226841]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:55 compute-0 sudo[226966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytcepdzcbfrhhncqipklggjobilztvpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009534.2862272-775-9405211155211/AnsiballZ_copy.py'
Nov 24 18:38:55 compute-0 sudo[226966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:55 compute-0 python3.9[226970]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009534.2862272-775-9405211155211/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:38:55 compute-0 sudo[226966]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:55 compute-0 cranky_lalande[226786]: {
Nov 24 18:38:55 compute-0 cranky_lalande[226786]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:38:55 compute-0 cranky_lalande[226786]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:38:55 compute-0 cranky_lalande[226786]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:38:55 compute-0 cranky_lalande[226786]:         "osd_id": 0,
Nov 24 18:38:55 compute-0 cranky_lalande[226786]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:38:55 compute-0 cranky_lalande[226786]:         "type": "bluestore"
Nov 24 18:38:55 compute-0 cranky_lalande[226786]:     },
Nov 24 18:38:55 compute-0 cranky_lalande[226786]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:38:55 compute-0 cranky_lalande[226786]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:38:55 compute-0 cranky_lalande[226786]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:38:55 compute-0 cranky_lalande[226786]:         "osd_id": 1,
Nov 24 18:38:55 compute-0 cranky_lalande[226786]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:38:55 compute-0 cranky_lalande[226786]:         "type": "bluestore"
Nov 24 18:38:55 compute-0 cranky_lalande[226786]:     },
Nov 24 18:38:55 compute-0 cranky_lalande[226786]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:38:55 compute-0 cranky_lalande[226786]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:38:55 compute-0 cranky_lalande[226786]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:38:55 compute-0 cranky_lalande[226786]:         "osd_id": 2,
Nov 24 18:38:55 compute-0 cranky_lalande[226786]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:38:55 compute-0 cranky_lalande[226786]:         "type": "bluestore"
Nov 24 18:38:55 compute-0 cranky_lalande[226786]:     }
Nov 24 18:38:55 compute-0 cranky_lalande[226786]: }
Nov 24 18:38:55 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:55 compute-0 systemd[1]: libpod-fc26ddf311eed8216313d5c4c816546ebfb748364f2ee0d9fa3d1cdee4a9175e.scope: Deactivated successfully.
Nov 24 18:38:55 compute-0 conmon[226786]: conmon fc26ddf311eed8216313 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc26ddf311eed8216313d5c4c816546ebfb748364f2ee0d9fa3d1cdee4a9175e.scope/container/memory.events
Nov 24 18:38:55 compute-0 podman[226720]: 2025-11-24 18:38:55.384066642 +0000 UTC m=+1.054364175 container died fc26ddf311eed8216313d5c4c816546ebfb748364f2ee0d9fa3d1cdee4a9175e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:38:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-06daddb67b93084d4a471ee6fff13745efb3237bbdc485cfab7b698a2557d416-merged.mount: Deactivated successfully.
Nov 24 18:38:55 compute-0 podman[226720]: 2025-11-24 18:38:55.443190045 +0000 UTC m=+1.113487588 container remove fc26ddf311eed8216313d5c4c816546ebfb748364f2ee0d9fa3d1cdee4a9175e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lalande, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 18:38:55 compute-0 systemd[1]: libpod-conmon-fc26ddf311eed8216313d5c4c816546ebfb748364f2ee0d9fa3d1cdee4a9175e.scope: Deactivated successfully.
Nov 24 18:38:55 compute-0 sudo[226493]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:55 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:38:55 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:38:55 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:38:55 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:38:55 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 5537b135-1332-42aa-9cf9-d6825d137d5c does not exist
Nov 24 18:38:55 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 256a0b64-18cf-4958-96f5-b18fe8fa6809 does not exist
Nov 24 18:38:55 compute-0 sudo[227054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:38:55 compute-0 sudo[227054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:38:55 compute-0 sudo[227054]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:55 compute-0 sudo[227108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:38:55 compute-0 sudo[227108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:38:55 compute-0 sudo[227108]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:55 compute-0 sudo[227206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogrffvzduhardhnwmzdlvcqhppodrqvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009535.4879532-775-142239690940572/AnsiballZ_stat.py'
Nov 24 18:38:55 compute-0 sudo[227206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:55 compute-0 python3.9[227208]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:38:55 compute-0 sudo[227206]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:56 compute-0 sudo[227329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwzfkacgistbzuxwpfrhazhtridbsvnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009535.4879532-775-142239690940572/AnsiballZ_copy.py'
Nov 24 18:38:56 compute-0 sudo[227329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:56 compute-0 ceph-mon[74927]: pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:56 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:38:56 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:38:56 compute-0 python3.9[227331]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009535.4879532-775-142239690940572/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:38:56 compute-0 sudo[227329]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:56 compute-0 sudo[227481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueiyeqlgfkhvnqbvtcpxpgetsxjbhkvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009536.6571999-775-98315007989053/AnsiballZ_stat.py'
Nov 24 18:38:56 compute-0 sudo[227481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:57 compute-0 python3.9[227483]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:38:57 compute-0 sudo[227481]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:57 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:57 compute-0 sudo[227604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgyionajuoepllztzsgyulgpajzxdchi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009536.6571999-775-98315007989053/AnsiballZ_copy.py'
Nov 24 18:38:57 compute-0 sudo[227604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:38:57 compute-0 python3.9[227606]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009536.6571999-775-98315007989053/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:38:57 compute-0 sudo[227604]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:58 compute-0 sudo[227756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wucmvbsuuzgoqauhnemcoxqrbwmzjhud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009537.847678-775-249069123223463/AnsiballZ_stat.py'
Nov 24 18:38:58 compute-0 sudo[227756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:58 compute-0 python3.9[227758]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:38:58 compute-0 sudo[227756]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:58 compute-0 ceph-mon[74927]: pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:38:58 compute-0 sudo[227879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kainghpocfyjicqkonycojapkrqrbumf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009537.847678-775-249069123223463/AnsiballZ_copy.py'
Nov 24 18:38:58 compute-0 sudo[227879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:38:58 compute-0 python3.9[227881]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009537.847678-775-249069123223463/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:38:58 compute-0 sudo[227879]: pam_unix(sudo:session): session closed for user root
Nov 24 18:38:59 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:00 compute-0 sudo[228031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opdstbzmeflhdeuemjyjrivbjonzaofw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009539.039171-775-84894498910282/AnsiballZ_stat.py'
Nov 24 18:39:00 compute-0 sudo[228031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:00 compute-0 python3.9[228033]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:39:00 compute-0 sudo[228031]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:00 compute-0 sudo[228154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siisynqjbqwgyqezcfqsmpwuepdcefps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009539.039171-775-84894498910282/AnsiballZ_copy.py'
Nov 24 18:39:00 compute-0 sudo[228154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:00 compute-0 python3.9[228156]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009539.039171-775-84894498910282/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:00 compute-0 ceph-mon[74927]: pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:00 compute-0 sudo[228154]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:01 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:01 compute-0 sudo[228306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbbqyirkmxkrhdyhvirlpqiohtbehbgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009541.0799706-775-8435775658737/AnsiballZ_stat.py'
Nov 24 18:39:01 compute-0 sudo[228306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:01 compute-0 python3.9[228308]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:39:01 compute-0 sudo[228306]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:01 compute-0 sudo[228429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjupxhxnzexbmwzlbvzgsizjuxagglib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009541.0799706-775-8435775658737/AnsiballZ_copy.py'
Nov 24 18:39:01 compute-0 sudo[228429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:02 compute-0 python3.9[228431]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009541.0799706-775-8435775658737/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:02 compute-0 sudo[228429]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:02 compute-0 sudo[228581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nelovuzxbjbhwlodaywirnxqzzafrjxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009542.3209498-775-9963113335835/AnsiballZ_stat.py'
Nov 24 18:39:02 compute-0 sudo[228581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:39:02 compute-0 python3.9[228583]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:39:02 compute-0 sudo[228581]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:02 compute-0 ceph-mon[74927]: pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:03 compute-0 sudo[228704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwsjpuqmfsaldeeeaqoxdszvzilbdied ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009542.3209498-775-9963113335835/AnsiballZ_copy.py'
Nov 24 18:39:03 compute-0 sudo[228704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:03 compute-0 python3.9[228706]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009542.3209498-775-9963113335835/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:03 compute-0 sudo[228704]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:03 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:03 compute-0 sudo[228856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obrjfickuflssqbiaxleqllwuzlnybql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009543.4839156-775-177273951310893/AnsiballZ_stat.py'
Nov 24 18:39:03 compute-0 sudo[228856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:04 compute-0 python3.9[228858]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:39:04 compute-0 sudo[228856]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:04 compute-0 sudo[228979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezulpjzejonznxeqqegicmeendelqfyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009543.4839156-775-177273951310893/AnsiballZ_copy.py'
Nov 24 18:39:04 compute-0 sudo[228979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:04 compute-0 python3.9[228981]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009543.4839156-775-177273951310893/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:04 compute-0 sudo[228979]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:39:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:39:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:39:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:39:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:39:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:39:05 compute-0 sudo[229131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sppxcytwjcypsytzqubiulbqccayedlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009544.7428737-775-146826171813683/AnsiballZ_stat.py'
Nov 24 18:39:05 compute-0 sudo[229131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:05 compute-0 python3.9[229133]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:39:05 compute-0 ceph-mon[74927]: pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:05 compute-0 sudo[229131]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:05 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:05 compute-0 sudo[229254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yntxoztzlyensnbzhvrfchqoxuzkbdkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009544.7428737-775-146826171813683/AnsiballZ_copy.py'
Nov 24 18:39:05 compute-0 sudo[229254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:05 compute-0 python3.9[229256]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009544.7428737-775-146826171813683/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:05 compute-0 sudo[229254]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:06 compute-0 sudo[229406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luabmfkgjlcenmutmbgossesbbvkihve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009545.9110618-775-213600914494861/AnsiballZ_stat.py'
Nov 24 18:39:06 compute-0 sudo[229406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:06 compute-0 python3.9[229408]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:39:06 compute-0 sudo[229406]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:06 compute-0 ceph-mon[74927]: pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:06 compute-0 sudo[229529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agtmyrjwyqlalseqqpldxukbgmrvovee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009545.9110618-775-213600914494861/AnsiballZ_copy.py'
Nov 24 18:39:06 compute-0 sudo[229529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:06 compute-0 python3.9[229531]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009545.9110618-775-213600914494861/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:07 compute-0 sudo[229529]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:07 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:07 compute-0 sudo[229681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfpolahiknbksluhqktbonogsrjohoxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009547.1536822-775-92031344019890/AnsiballZ_stat.py'
Nov 24 18:39:07 compute-0 sudo[229681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:07 compute-0 python3.9[229683]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:39:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:39:07 compute-0 sudo[229681]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:08 compute-0 sudo[229804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwfvvqqqjciuoorybtcmuhkvlhjsyvux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009547.1536822-775-92031344019890/AnsiballZ_copy.py'
Nov 24 18:39:08 compute-0 sudo[229804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:08 compute-0 python3.9[229806]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009547.1536822-775-92031344019890/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:08 compute-0 sudo[229804]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:09 compute-0 ceph-mon[74927]: pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:09 compute-0 python3.9[229956]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:39:09 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:09 compute-0 sudo[230109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yejvljfznoeotfzoviuvjpkgppbrinsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009549.3684702-981-212527607786000/AnsiballZ_seboolean.py'
Nov 24 18:39:09 compute-0 sudo[230109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:10 compute-0 python3.9[230111]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 24 18:39:11 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:12 compute-0 ceph-mon[74927]: pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:39:12 compute-0 sudo[230109]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:13 compute-0 ceph-mon[74927]: pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:13 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:13 compute-0 sudo[230266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewlpmqbeeztmfekgpzenxhggmhxtnzfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009553.1716893-989-63949977664380/AnsiballZ_copy.py'
Nov 24 18:39:13 compute-0 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 24 18:39:13 compute-0 sudo[230266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:13 compute-0 python3.9[230268]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:13 compute-0 sudo[230266]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:14 compute-0 sudo[230418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edjwwobtfsgvaurggtcvdqrgoftminel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009553.830123-989-68907870011161/AnsiballZ_copy.py'
Nov 24 18:39:14 compute-0 sudo[230418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:14 compute-0 python3.9[230420]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:14 compute-0 sudo[230418]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:14 compute-0 ceph-mon[74927]: pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:14 compute-0 sudo[230570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtjxrxmwwnxalptixqiidwveklyjrklh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009554.4436276-989-70923051355159/AnsiballZ_copy.py'
Nov 24 18:39:14 compute-0 sudo[230570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:14 compute-0 python3.9[230572]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:14 compute-0 sudo[230570]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:15 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:15 compute-0 sudo[230722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqpoujabjesswibchxzmognvqzsakdnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009555.1010919-989-61622256196640/AnsiballZ_copy.py'
Nov 24 18:39:15 compute-0 sudo[230722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:15 compute-0 python3.9[230724]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:15 compute-0 sudo[230722]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:16 compute-0 sudo[230874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifasxvctsdddsdhduncprptnuevekspv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009555.8455436-989-74414142263744/AnsiballZ_copy.py'
Nov 24 18:39:16 compute-0 sudo[230874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:16 compute-0 python3.9[230876]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:16 compute-0 sudo[230874]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:16 compute-0 ceph-mon[74927]: pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:16 compute-0 sudo[231026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egsloolwsbhflnfmgruatyjkwxwudroj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009556.5485408-1025-127689180423254/AnsiballZ_copy.py'
Nov 24 18:39:16 compute-0 sudo[231026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:17 compute-0 python3.9[231028]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:17 compute-0 sudo[231026]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:17 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:17 compute-0 sudo[231178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysltvwgdyttcuvrgxdeqegefrnvgjpyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009557.1733093-1025-244862242603326/AnsiballZ_copy.py'
Nov 24 18:39:17 compute-0 sudo[231178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:17 compute-0 python3.9[231180]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:17 compute-0 sudo[231178]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:39:18 compute-0 sudo[231330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgehxrpskcmwevtrhestygrftuttqwke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009557.7767973-1025-253177037969548/AnsiballZ_copy.py'
Nov 24 18:39:18 compute-0 sudo[231330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:18 compute-0 python3.9[231332]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:18 compute-0 sudo[231330]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:18 compute-0 ceph-mon[74927]: pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:18 compute-0 sudo[231482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usdgoixfwgbrotffogojvodhyhmdgqtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009558.5334358-1025-175369467277069/AnsiballZ_copy.py'
Nov 24 18:39:18 compute-0 sudo[231482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:18 compute-0 python3.9[231484]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:18 compute-0 sudo[231482]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:19 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:19 compute-0 sudo[231634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glumnkdzkubgylnjqbypdnvutlzlosrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009559.1472116-1025-71224952685830/AnsiballZ_copy.py'
Nov 24 18:39:19 compute-0 sudo[231634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:19 compute-0 python3.9[231636]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:19 compute-0 sudo[231634]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:20 compute-0 sudo[231803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsafkbgkirwxrihcyhkuzjjmmnbsufmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009559.8870466-1061-241467411308106/AnsiballZ_systemd.py'
Nov 24 18:39:20 compute-0 sudo[231803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:20 compute-0 podman[231760]: 2025-11-24 18:39:20.379036612 +0000 UTC m=+0.120064582 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Nov 24 18:39:20 compute-0 python3.9[231808]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 18:39:20 compute-0 systemd[1]: Reloading.
Nov 24 18:39:20 compute-0 ceph-mon[74927]: pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:20 compute-0 systemd-rc-local-generator[231843]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:39:20 compute-0 systemd-sysv-generator[231846]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:39:21 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Nov 24 18:39:21 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Nov 24 18:39:21 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 24 18:39:21 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 24 18:39:21 compute-0 systemd[1]: Starting libvirt logging daemon...
Nov 24 18:39:21 compute-0 systemd[1]: Started libvirt logging daemon.
Nov 24 18:39:21 compute-0 sudo[231803]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:21 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:21 compute-0 sudo[232006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epmbjogwxkaftbmszggijoduhuevfekq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009561.354288-1061-42727850385283/AnsiballZ_systemd.py'
Nov 24 18:39:21 compute-0 sudo[232006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:22 compute-0 python3.9[232008]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 18:39:22 compute-0 systemd[1]: Reloading.
Nov 24 18:39:22 compute-0 systemd-rc-local-generator[232035]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:39:22 compute-0 systemd-sysv-generator[232040]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:39:22 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 24 18:39:22 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 24 18:39:22 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 24 18:39:22 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 24 18:39:22 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 24 18:39:22 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 24 18:39:22 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 24 18:39:22 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 24 18:39:22 compute-0 sudo[232006]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:39:22 compute-0 ceph-mon[74927]: pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:39:22.731 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:39:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:39:22.733 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:39:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:39:22.733 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:39:22 compute-0 sudo[232222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwmuzhdsxplefjgqxtrwcrcmjfyzrfzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009562.6420636-1061-102465752187965/AnsiballZ_systemd.py'
Nov 24 18:39:22 compute-0 sudo[232222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:23 compute-0 python3.9[232224]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 18:39:23 compute-0 systemd[1]: Reloading.
Nov 24 18:39:23 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:23 compute-0 systemd-rc-local-generator[232248]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:39:23 compute-0 systemd-sysv-generator[232252]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:39:23 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 24 18:39:23 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 24 18:39:23 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 24 18:39:23 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 24 18:39:23 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 24 18:39:23 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 24 18:39:23 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 24 18:39:23 compute-0 sudo[232222]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:23 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 24 18:39:23 compute-0 podman[232332]: 2025-11-24 18:39:23.978889467 +0000 UTC m=+0.065621024 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 18:39:24 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 24 18:39:24 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 24 18:39:24 compute-0 sudo[232460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crbsznxusrjqmddjqueydeinvkpfythb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009563.886872-1061-100589034698503/AnsiballZ_systemd.py'
Nov 24 18:39:24 compute-0 sudo[232460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:24 compute-0 python3.9[232462]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 18:39:24 compute-0 systemd[1]: Reloading.
Nov 24 18:39:24 compute-0 systemd-rc-local-generator[232492]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:39:24 compute-0 systemd-sysv-generator[232495]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:39:24 compute-0 ceph-mon[74927]: pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:24 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Nov 24 18:39:24 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 24 18:39:24 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 24 18:39:24 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 24 18:39:24 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 24 18:39:24 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 24 18:39:24 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 24 18:39:24 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 24 18:39:24 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 24 18:39:24 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 24 18:39:24 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 24 18:39:24 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 24 18:39:25 compute-0 sudo[232460]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:25 compute-0 setroubleshoot[232261]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 551798ee-1cc8-459a-bbd8-c0d1c25a6a72
Nov 24 18:39:25 compute-0 setroubleshoot[232261]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Nov 24 18:39:25 compute-0 setroubleshoot[232261]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 551798ee-1cc8-459a-bbd8-c0d1c25a6a72
Nov 24 18:39:25 compute-0 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 18:39:25 compute-0 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 18:39:25 compute-0 setroubleshoot[232261]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Nov 24 18:39:25 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:25 compute-0 sudo[232678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szazapbhztsbzonsjbhpicbgnvmlbekf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009565.1908464-1061-150456086287358/AnsiballZ_systemd.py'
Nov 24 18:39:25 compute-0 sudo[232678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:25 compute-0 python3.9[232680]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 18:39:25 compute-0 systemd[1]: Reloading.
Nov 24 18:39:26 compute-0 systemd-sysv-generator[232709]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:39:26 compute-0 systemd-rc-local-generator[232705]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:39:26 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Nov 24 18:39:26 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Nov 24 18:39:26 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 24 18:39:26 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 24 18:39:26 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 24 18:39:26 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 24 18:39:26 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 24 18:39:26 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 24 18:39:26 compute-0 sudo[232678]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:26 compute-0 ceph-mon[74927]: pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:26 compute-0 sudo[232889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxrarxsmcscaxosgetkpfxvtqngalvpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009566.644677-1098-242996347630256/AnsiballZ_file.py'
Nov 24 18:39:26 compute-0 sudo[232889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:27 compute-0 python3.9[232891]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:27 compute-0 sudo[232889]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:27 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:27 compute-0 sudo[233041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjfnmghdoijjoqaskzsprvmaqftgmkwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009567.309065-1106-49114555814648/AnsiballZ_find.py'
Nov 24 18:39:27 compute-0 sudo[233041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:39:27 compute-0 python3.9[233043]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 18:39:27 compute-0 sudo[233041]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:28 compute-0 sudo[233193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzspifuodhqvkephizwztmfswtvittyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009568.0415316-1114-233602832437457/AnsiballZ_command.py'
Nov 24 18:39:28 compute-0 sudo[233193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:28 compute-0 python3.9[233195]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:39:28 compute-0 sudo[233193]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:29 compute-0 ceph-mon[74927]: pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:29 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:29 compute-0 python3.9[233349]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 18:39:30 compute-0 python3.9[233499]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:39:30 compute-0 ceph-mon[74927]: pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:30 compute-0 python3.9[233620]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764009569.829705-1133-202693258846034/.source.xml follow=False _original_basename=secret.xml.j2 checksum=c2fabb65dd6b649e2c3b161b54086479a3dfe11a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:31 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:31 compute-0 sudo[233770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iilgdfvesbutdgqlunqvfpjnrsjppwqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009571.2418182-1148-29874553970332/AnsiballZ_command.py'
Nov 24 18:39:31 compute-0 sudo[233770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:31 compute-0 python3.9[233772]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine e5ee928f-099b-569b-93c9-ecf025cbb50d
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:39:31 compute-0 polkitd[43339]: Registered Authentication Agent for unix-process:233774:330354 (system bus name :1.3404 [pkttyagent --process 233774 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 24 18:39:31 compute-0 polkitd[43339]: Unregistered Authentication Agent for unix-process:233774:330354 (system bus name :1.3404, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 24 18:39:31 compute-0 polkitd[43339]: Registered Authentication Agent for unix-process:233773:330353 (system bus name :1.3405 [pkttyagent --process 233773 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 24 18:39:31 compute-0 polkitd[43339]: Unregistered Authentication Agent for unix-process:233773:330353 (system bus name :1.3405, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 24 18:39:32 compute-0 sudo[233770]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:39:32 compute-0 python3.9[233934]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:32 compute-0 ceph-mon[74927]: pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:33 compute-0 sudo[234084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojejfmujbfrtrtxslmvsxskrsrntyrlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009573.026711-1164-171915230698297/AnsiballZ_command.py'
Nov 24 18:39:33 compute-0 sudo[234084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:33 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:33 compute-0 sudo[234084]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:34 compute-0 sudo[234237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diumwkcargrlovzqzranpoafxqvsbvvu ; FSID=e5ee928f-099b-569b-93c9-ecf025cbb50d KEY=AQBqoSRpAAAAABAAwYZz6MMXWB3V3iQXlmOz0w== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009573.8104038-1172-279856717322891/AnsiballZ_command.py'
Nov 24 18:39:34 compute-0 sudo[234237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:34 compute-0 polkitd[43339]: Registered Authentication Agent for unix-process:234240:330593 (system bus name :1.3408 [pkttyagent --process 234240 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 24 18:39:34 compute-0 polkitd[43339]: Unregistered Authentication Agent for unix-process:234240:330593 (system bus name :1.3408, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 24 18:39:34 compute-0 sudo[234237]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:39:34
Nov 24 18:39:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:39:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:39:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'volumes', 'vms']
Nov 24 18:39:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:39:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:39:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:39:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:39:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:39:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:39:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:39:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:39:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:39:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:39:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:39:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:39:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:39:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:39:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:39:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:39:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:39:34 compute-0 sudo[234395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhwdpcraukojqbaicjkfpvetmrulsdyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009574.557114-1180-12665175893032/AnsiballZ_copy.py'
Nov 24 18:39:34 compute-0 sudo[234395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:35 compute-0 python3.9[234397]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:35 compute-0 sudo[234395]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:35 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 24 18:39:35 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 24 18:39:35 compute-0 ceph-mon[74927]: pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:35 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:35 compute-0 sudo[234547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfzdkhevswrlefpqzatbqalnlmfcsksx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009575.2489197-1188-126197433940445/AnsiballZ_stat.py'
Nov 24 18:39:35 compute-0 sudo[234547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:35 compute-0 python3.9[234549]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:39:35 compute-0 sudo[234547]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:36 compute-0 sudo[234670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itpuvajmexmvetgbsxijmgalemwxqqpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009575.2489197-1188-126197433940445/AnsiballZ_copy.py'
Nov 24 18:39:36 compute-0 sudo[234670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:36 compute-0 python3.9[234672]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764009575.2489197-1188-126197433940445/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:36 compute-0 sudo[234670]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:36 compute-0 sudo[234822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbjhnbdntussazytynkcqjtmlvezffxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009576.6678045-1204-60300127596239/AnsiballZ_file.py'
Nov 24 18:39:36 compute-0 sudo[234822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:37 compute-0 python3.9[234824]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:37 compute-0 sudo[234822]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:37 compute-0 ceph-mon[74927]: pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:37 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:39:37 compute-0 sudo[234974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcdecuixlnyelmnclnvzkpohynhvtgrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009577.4044664-1212-112352272048861/AnsiballZ_stat.py'
Nov 24 18:39:37 compute-0 sudo[234974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:37 compute-0 python3.9[234976]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:39:37 compute-0 sudo[234974]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:38 compute-0 sudo[235052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gftenejjlhmyciurhgmhygiozeadghpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009577.4044664-1212-112352272048861/AnsiballZ_file.py'
Nov 24 18:39:38 compute-0 sudo[235052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:38 compute-0 python3.9[235054]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:38 compute-0 sudo[235052]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:38 compute-0 ceph-mon[74927]: pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:38 compute-0 sudo[235204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdiqrsujryofacyxxlagniffjvexsbxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009578.521442-1224-269520812319280/AnsiballZ_stat.py'
Nov 24 18:39:38 compute-0 sudo[235204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:38 compute-0 python3.9[235206]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:39:38 compute-0 sudo[235204]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:39 compute-0 sudo[235282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipomepfwozekqsygvzeszzgwfbnnkhjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009578.521442-1224-269520812319280/AnsiballZ_file.py'
Nov 24 18:39:39 compute-0 sudo[235282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:39 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:39 compute-0 python3.9[235284]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.wsujy5u1 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:39 compute-0 sudo[235282]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:39 compute-0 sudo[235434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zayxtntrmoqzpbwlruvfdcevszbjqylf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009579.6010542-1236-18419024765291/AnsiballZ_stat.py'
Nov 24 18:39:39 compute-0 sudo[235434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:40 compute-0 python3.9[235436]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:39:40 compute-0 sudo[235434]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:40 compute-0 sudo[235512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeqzqluiqtuhbhfiknnsbmdrurqowoyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009579.6010542-1236-18419024765291/AnsiballZ_file.py'
Nov 24 18:39:40 compute-0 sudo[235512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:40 compute-0 python3.9[235514]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:40 compute-0 sudo[235512]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:40 compute-0 ceph-mon[74927]: pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:41 compute-0 sudo[235664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlqsflcddjcywmkmkitggpsqcgxomqyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009580.8044505-1249-75021362335237/AnsiballZ_command.py'
Nov 24 18:39:41 compute-0 sudo[235664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:41 compute-0 python3.9[235666]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:39:41 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:41 compute-0 sudo[235664]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:42 compute-0 sudo[235817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmszviqxhyxogfaewmihmolhpfolpvbr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764009581.5927052-1257-124437771113100/AnsiballZ_edpm_nftables_from_files.py'
Nov 24 18:39:42 compute-0 sudo[235817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:42 compute-0 python3[235819]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 24 18:39:42 compute-0 sudo[235817]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:39:42 compute-0 sudo[235969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qetirsedqbdnbzjhkkbphjilfpqtrazv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009582.6141658-1265-187330711738795/AnsiballZ_stat.py'
Nov 24 18:39:42 compute-0 sudo[235969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:43 compute-0 python3.9[235971]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:39:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:39:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:39:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:39:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:39:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:39:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:39:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:39:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:39:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:39:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:39:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:39:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:39:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:39:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:39:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:39:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:39:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:39:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:39:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:39:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:39:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:39:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:39:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:39:43 compute-0 sudo[235969]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:43 compute-0 ceph-mon[74927]: pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:43 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:43 compute-0 sudo[236047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwlppxarbphnxbeqkpvuufsyzwewzdzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009582.6141658-1265-187330711738795/AnsiballZ_file.py'
Nov 24 18:39:43 compute-0 sudo[236047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:43 compute-0 python3.9[236049]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:43 compute-0 sudo[236047]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:44 compute-0 sudo[236199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtfqeeojoieaaabevxafbqcxyfruurib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009583.7623963-1277-235895243071674/AnsiballZ_stat.py'
Nov 24 18:39:44 compute-0 sudo[236199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:44 compute-0 python3.9[236201]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:39:44 compute-0 sudo[236199]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:44 compute-0 sudo[236277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfqweofepwgfkbawlwpwfbjnnkprqcfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009583.7623963-1277-235895243071674/AnsiballZ_file.py'
Nov 24 18:39:44 compute-0 sudo[236277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:44 compute-0 ceph-mon[74927]: pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:44 compute-0 python3.9[236279]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:44 compute-0 sudo[236277]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:45 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:45 compute-0 sudo[236429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqzbzwsormneuewulshkbbxyckxmocrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009585.128884-1289-48429577102924/AnsiballZ_stat.py'
Nov 24 18:39:45 compute-0 sudo[236429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:45 compute-0 python3.9[236431]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:39:45 compute-0 sudo[236429]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:46 compute-0 sudo[236507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udjtebnimfwcuzvpnxeqdsaqfsefgodz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009585.128884-1289-48429577102924/AnsiballZ_file.py'
Nov 24 18:39:46 compute-0 sudo[236507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:46 compute-0 python3.9[236509]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:46 compute-0 sudo[236507]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:46 compute-0 sudo[236659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptsnardflgzbjzsycalgeznkrfpvfxke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009586.488886-1301-39919197835979/AnsiballZ_stat.py'
Nov 24 18:39:46 compute-0 sudo[236659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:47 compute-0 python3.9[236661]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:39:47 compute-0 ceph-mon[74927]: pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:47 compute-0 sudo[236659]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:47 compute-0 sudo[236737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syulkikoqcypwbjduizawdyoqydshcgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009586.488886-1301-39919197835979/AnsiballZ_file.py'
Nov 24 18:39:47 compute-0 sudo[236737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:47 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:47 compute-0 python3.9[236739]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:47 compute-0 sudo[236737]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:39:48 compute-0 sudo[236889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvftnmeegmlbckjhxitfonogzhcglacu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009587.7336202-1313-59936813877611/AnsiballZ_stat.py'
Nov 24 18:39:48 compute-0 sudo[236889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:48 compute-0 python3.9[236891]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:39:48 compute-0 sudo[236889]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:48 compute-0 sudo[237014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhyttjlbxwbistakwzeyiaepdbqktwjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009587.7336202-1313-59936813877611/AnsiballZ_copy.py'
Nov 24 18:39:48 compute-0 sudo[237014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:48 compute-0 python3.9[237016]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009587.7336202-1313-59936813877611/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:49 compute-0 sudo[237014]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:49 compute-0 ceph-mon[74927]: pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:49 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:49 compute-0 sudo[237166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktgvtqeoqrlyguswkmwdaxwuvnqlqsqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009589.198784-1328-183033091661757/AnsiballZ_file.py'
Nov 24 18:39:49 compute-0 sudo[237166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:49 compute-0 python3.9[237168]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:49 compute-0 sudo[237166]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:50 compute-0 sudo[237318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqrhvucoqvlqpagphuticqbctdqojjzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009589.8952613-1336-134823873275094/AnsiballZ_command.py'
Nov 24 18:39:50 compute-0 sudo[237318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:50 compute-0 python3.9[237320]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:39:50 compute-0 sudo[237318]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:51 compute-0 podman[237400]: 2025-11-24 18:39:51.027587555 +0000 UTC m=+0.113153853 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 24 18:39:51 compute-0 sudo[237500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnspcbzxgfjljfshoucozlzuqmvlwzfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009590.6771572-1344-183712236190282/AnsiballZ_blockinfile.py'
Nov 24 18:39:51 compute-0 sudo[237500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:51 compute-0 ceph-mon[74927]: pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:51 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:51 compute-0 python3.9[237502]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:51 compute-0 sudo[237500]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:52 compute-0 sudo[237652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyvkffqusuydpeanbecgfbnatbngcuvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009591.6636136-1353-151257107578064/AnsiballZ_command.py'
Nov 24 18:39:52 compute-0 sudo[237652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:52 compute-0 python3.9[237654]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:39:52 compute-0 sudo[237652]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:52 compute-0 ceph-mon[74927]: pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:39:52 compute-0 sudo[237805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzsukrpfsumfcikfamddiwphvgbsnofy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009592.4518707-1361-188973597724115/AnsiballZ_stat.py'
Nov 24 18:39:52 compute-0 sudo[237805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:52 compute-0 python3.9[237807]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:39:53 compute-0 sudo[237805]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:53 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:53 compute-0 sudo[237959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beqoccoivlbhbpdvzdxozfgirjepavwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009593.2097468-1369-268398175094554/AnsiballZ_command.py'
Nov 24 18:39:53 compute-0 sudo[237959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:53 compute-0 python3.9[237961]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:39:53 compute-0 sudo[237959]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:54 compute-0 podman[238088]: 2025-11-24 18:39:54.342050132 +0000 UTC m=+0.051516577 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:39:54 compute-0 sudo[238133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thvbvlmryqwpeyenqhqdlrnuhclbfkhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009594.0420773-1377-90137199583030/AnsiballZ_file.py'
Nov 24 18:39:54 compute-0 sudo[238133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:54 compute-0 python3.9[238136]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:54 compute-0 sudo[238133]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:54 compute-0 ceph-mon[74927]: pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:55 compute-0 sudo[238286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwecwuyuihplbvwjfgobzfihsqtbucjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009594.8343933-1385-15323831221177/AnsiballZ_stat.py'
Nov 24 18:39:55 compute-0 sudo[238286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:55 compute-0 python3.9[238288]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:39:55 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:55 compute-0 sudo[238286]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:55 compute-0 sudo[238359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:39:55 compute-0 sudo[238359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:39:55 compute-0 sudo[238359]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:55 compute-0 sudo[238408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:39:55 compute-0 sudo[238408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:39:55 compute-0 sudo[238408]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:55 compute-0 sudo[238457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htmircmvgnyfeyjsgmrliadevgpkjbdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009594.8343933-1385-15323831221177/AnsiballZ_copy.py'
Nov 24 18:39:55 compute-0 sudo[238457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:55 compute-0 sudo[238462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:39:55 compute-0 sudo[238462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:39:55 compute-0 sudo[238462]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:55 compute-0 sudo[238487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 24 18:39:55 compute-0 sudo[238487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:39:56 compute-0 python3.9[238461]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764009594.8343933-1385-15323831221177/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:56 compute-0 sudo[238457]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:56 compute-0 sudo[238487]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:56 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:39:56 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:39:56 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:39:56 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:39:56 compute-0 sudo[238681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhqyobijnkmewjpmdtfrsrtuwypnzojh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009596.216751-1400-108416072883157/AnsiballZ_stat.py'
Nov 24 18:39:56 compute-0 sudo[238681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:56 compute-0 sudo[238682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:39:56 compute-0 sudo[238682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:39:56 compute-0 sudo[238682]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:56 compute-0 sudo[238709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:39:56 compute-0 sudo[238709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:39:56 compute-0 sudo[238709]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:56 compute-0 sudo[238734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:39:56 compute-0 sudo[238734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:39:56 compute-0 sudo[238734]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:56 compute-0 python3.9[238688]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:39:56 compute-0 sudo[238759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:39:56 compute-0 sudo[238759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:39:56 compute-0 sudo[238681]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:57 compute-0 sudo[238921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwvdahavfemkfdrohotlylorlozjzkmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009596.216751-1400-108416072883157/AnsiballZ_copy.py'
Nov 24 18:39:57 compute-0 sudo[238921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:57 compute-0 sudo[238759]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:39:57 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:39:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:39:57 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:39:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:39:57 compute-0 python3.9[238925]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764009596.216751-1400-108416072883157/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:57 compute-0 sudo[238921]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:57 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:57 compute-0 ceph-mon[74927]: pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:57 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:39:57 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:39:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:39:57 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:39:57 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 0011bb82-7580-4ec0-ae7d-afb17da63679 does not exist
Nov 24 18:39:57 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 4d92a59b-123b-41b8-bf93-83aa14acf33e does not exist
Nov 24 18:39:57 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 79bb678b-ee33-4828-935d-8e3f7af82ff5 does not exist
Nov 24 18:39:57 compute-0 sudo[239087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztjzfcxuujmnurpcmtpqqyxoszkmtkpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009597.576854-1415-126753289083347/AnsiballZ_stat.py'
Nov 24 18:39:58 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:39:58 compute-0 sudo[239087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:58 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:39:58 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:39:58 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:39:58 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:39:58 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:39:58 compute-0 sudo[239090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:39:58 compute-0 sudo[239090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:39:58 compute-0 sudo[239090]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:58 compute-0 sudo[239115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:39:58 compute-0 sudo[239115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:39:58 compute-0 sudo[239115]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:58 compute-0 python3.9[239089]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:39:58 compute-0 sudo[239087]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:58 compute-0 sudo[239140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:39:58 compute-0 sudo[239140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:39:58 compute-0 sudo[239140]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:58 compute-0 sudo[239165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:39:58 compute-0 sudo[239165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:39:59 compute-0 sudo[239362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erxxenanhivuornqigojlmsgxhwladus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009597.576854-1415-126753289083347/AnsiballZ_copy.py'
Nov 24 18:39:59 compute-0 sudo[239362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:39:59 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:59 compute-0 podman[239332]: 2025-11-24 18:39:59.298604639 +0000 UTC m=+0.023195455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:39:59 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:39:59 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:39:59 compute-0 ceph-mon[74927]: pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:39:59 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:39:59 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:39:59 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:39:59 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:39:59 compute-0 python3.9[239367]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764009597.576854-1415-126753289083347/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:39:59 compute-0 sudo[239362]: pam_unix(sudo:session): session closed for user root
Nov 24 18:39:59 compute-0 podman[239332]: 2025-11-24 18:39:59.702142013 +0000 UTC m=+0.426732819 container create c30801816852cb4a42db06425d28fa2c8b06d9eb23a8435fc3e7c7abf2a3a736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 18:39:59 compute-0 systemd[1]: Started libpod-conmon-c30801816852cb4a42db06425d28fa2c8b06d9eb23a8435fc3e7c7abf2a3a736.scope.
Nov 24 18:39:59 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:39:59 compute-0 podman[239332]: 2025-11-24 18:39:59.985959303 +0000 UTC m=+0.710550079 container init c30801816852cb4a42db06425d28fa2c8b06d9eb23a8435fc3e7c7abf2a3a736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 18:39:59 compute-0 podman[239332]: 2025-11-24 18:39:59.996093954 +0000 UTC m=+0.720684750 container start c30801816852cb4a42db06425d28fa2c8b06d9eb23a8435fc3e7c7abf2a3a736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_moore, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 18:40:00 compute-0 keen_moore[239446]: 167 167
Nov 24 18:40:00 compute-0 systemd[1]: libpod-c30801816852cb4a42db06425d28fa2c8b06d9eb23a8435fc3e7c7abf2a3a736.scope: Deactivated successfully.
Nov 24 18:40:00 compute-0 podman[239332]: 2025-11-24 18:40:00.053891205 +0000 UTC m=+0.778482011 container attach c30801816852cb4a42db06425d28fa2c8b06d9eb23a8435fc3e7c7abf2a3a736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:40:00 compute-0 podman[239332]: 2025-11-24 18:40:00.054701195 +0000 UTC m=+0.779292001 container died c30801816852cb4a42db06425d28fa2c8b06d9eb23a8435fc3e7c7abf2a3a736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_moore, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:40:00 compute-0 sudo[239532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qretqbctqtocneetcogrvbgugnzinneg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009599.8039045-1430-81497857651530/AnsiballZ_systemd.py'
Nov 24 18:40:00 compute-0 sudo[239532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-776900ec71b72844c6bcb2fa95ca11e3833bee21a562c9b69d2b544c56ae8490-merged.mount: Deactivated successfully.
Nov 24 18:40:00 compute-0 python3.9[239535]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:40:00 compute-0 systemd[1]: Reloading.
Nov 24 18:40:00 compute-0 systemd-rc-local-generator[239561]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:40:00 compute-0 systemd-sysv-generator[239565]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:40:01 compute-0 ceph-mon[74927]: pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:01 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Nov 24 18:40:01 compute-0 sudo[239532]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:01 compute-0 podman[239332]: 2025-11-24 18:40:01.133243957 +0000 UTC m=+1.857834723 container remove c30801816852cb4a42db06425d28fa2c8b06d9eb23a8435fc3e7c7abf2a3a736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_moore, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:40:01 compute-0 systemd[1]: libpod-conmon-c30801816852cb4a42db06425d28fa2c8b06d9eb23a8435fc3e7c7abf2a3a736.scope: Deactivated successfully.
Nov 24 18:40:01 compute-0 podman[239630]: 2025-11-24 18:40:01.268364953 +0000 UTC m=+0.025477841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:40:01 compute-0 podman[239630]: 2025-11-24 18:40:01.373492156 +0000 UTC m=+0.130604994 container create c5d208456c463172646b85e188b37d5061841d531268e83e4a3f72ba1ae84a19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_feynman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 18:40:01 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:01 compute-0 sudo[239746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owufrqpstztexrhooinmjrqezusldcwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009601.2227242-1438-132782802822640/AnsiballZ_systemd.py'
Nov 24 18:40:01 compute-0 sudo[239746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:01 compute-0 systemd[1]: Started libpod-conmon-c5d208456c463172646b85e188b37d5061841d531268e83e4a3f72ba1ae84a19.scope.
Nov 24 18:40:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afff9de9e6ebc806885f760a22a5f358e5d1ef235662e30ced429a6afce0330e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afff9de9e6ebc806885f760a22a5f358e5d1ef235662e30ced429a6afce0330e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afff9de9e6ebc806885f760a22a5f358e5d1ef235662e30ced429a6afce0330e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afff9de9e6ebc806885f760a22a5f358e5d1ef235662e30ced429a6afce0330e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afff9de9e6ebc806885f760a22a5f358e5d1ef235662e30ced429a6afce0330e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:40:01 compute-0 podman[239630]: 2025-11-24 18:40:01.627822865 +0000 UTC m=+0.384935723 container init c5d208456c463172646b85e188b37d5061841d531268e83e4a3f72ba1ae84a19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_feynman, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 18:40:01 compute-0 podman[239630]: 2025-11-24 18:40:01.638200472 +0000 UTC m=+0.395313320 container start c5d208456c463172646b85e188b37d5061841d531268e83e4a3f72ba1ae84a19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 18:40:01 compute-0 podman[239630]: 2025-11-24 18:40:01.746082634 +0000 UTC m=+0.503195482 container attach c5d208456c463172646b85e188b37d5061841d531268e83e4a3f72ba1ae84a19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 18:40:01 compute-0 python3.9[239750]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 24 18:40:01 compute-0 systemd[1]: Reloading.
Nov 24 18:40:01 compute-0 systemd-rc-local-generator[239784]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:40:01 compute-0 systemd-sysv-generator[239787]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:40:02 compute-0 systemd[1]: Reloading.
Nov 24 18:40:02 compute-0 systemd-sysv-generator[239826]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:40:02 compute-0 systemd-rc-local-generator[239822]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:40:02 compute-0 sudo[239746]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:02 compute-0 recursing_feynman[239751]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:40:02 compute-0 recursing_feynman[239751]: --> relative data size: 1.0
Nov 24 18:40:02 compute-0 recursing_feynman[239751]: --> All data devices are unavailable
Nov 24 18:40:02 compute-0 systemd[1]: libpod-c5d208456c463172646b85e188b37d5061841d531268e83e4a3f72ba1ae84a19.scope: Deactivated successfully.
Nov 24 18:40:02 compute-0 podman[239630]: 2025-11-24 18:40:02.611120688 +0000 UTC m=+1.368233516 container died c5d208456c463172646b85e188b37d5061841d531268e83e4a3f72ba1ae84a19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_feynman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 18:40:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:40:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-afff9de9e6ebc806885f760a22a5f358e5d1ef235662e30ced429a6afce0330e-merged.mount: Deactivated successfully.
Nov 24 18:40:02 compute-0 sshd-session[179880]: Connection closed by 192.168.122.30 port 51980
Nov 24 18:40:02 compute-0 sshd-session[179877]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:40:02 compute-0 systemd-logind[822]: Session 51 logged out. Waiting for processes to exit.
Nov 24 18:40:02 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Nov 24 18:40:02 compute-0 systemd[1]: session-51.scope: Consumed 3min 22.511s CPU time.
Nov 24 18:40:02 compute-0 systemd-logind[822]: Removed session 51.
Nov 24 18:40:03 compute-0 podman[239630]: 2025-11-24 18:40:03.012246123 +0000 UTC m=+1.769358951 container remove c5d208456c463172646b85e188b37d5061841d531268e83e4a3f72ba1ae84a19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_feynman, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:40:03 compute-0 systemd[1]: libpod-conmon-c5d208456c463172646b85e188b37d5061841d531268e83e4a3f72ba1ae84a19.scope: Deactivated successfully.
Nov 24 18:40:03 compute-0 sudo[239165]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:03 compute-0 sudo[239890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:40:03 compute-0 sudo[239890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:40:03 compute-0 sudo[239890]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:03 compute-0 ceph-mon[74927]: pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:03 compute-0 sudo[239915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:40:03 compute-0 sudo[239915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:40:03 compute-0 sudo[239915]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:03 compute-0 sudo[239940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:40:03 compute-0 sudo[239940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:40:03 compute-0 sudo[239940]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:03 compute-0 sudo[239965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:40:03 compute-0 sudo[239965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:40:03 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:03 compute-0 podman[240030]: 2025-11-24 18:40:03.698578641 +0000 UTC m=+0.020932410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:40:03 compute-0 podman[240030]: 2025-11-24 18:40:03.87704117 +0000 UTC m=+0.199394909 container create 13bd945d47e4d22a600cb5918d8ea600269311a0ed5d84409015d1ac460cef46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heyrovsky, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 18:40:04 compute-0 systemd[1]: Started libpod-conmon-13bd945d47e4d22a600cb5918d8ea600269311a0ed5d84409015d1ac460cef46.scope.
Nov 24 18:40:04 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:40:04 compute-0 podman[240030]: 2025-11-24 18:40:04.257464552 +0000 UTC m=+0.579818311 container init 13bd945d47e4d22a600cb5918d8ea600269311a0ed5d84409015d1ac460cef46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heyrovsky, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 18:40:04 compute-0 podman[240030]: 2025-11-24 18:40:04.27149999 +0000 UTC m=+0.593853769 container start 13bd945d47e4d22a600cb5918d8ea600269311a0ed5d84409015d1ac460cef46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heyrovsky, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:40:04 compute-0 dazzling_heyrovsky[240046]: 167 167
Nov 24 18:40:04 compute-0 systemd[1]: libpod-13bd945d47e4d22a600cb5918d8ea600269311a0ed5d84409015d1ac460cef46.scope: Deactivated successfully.
Nov 24 18:40:04 compute-0 podman[240030]: 2025-11-24 18:40:04.394511266 +0000 UTC m=+0.716865035 container attach 13bd945d47e4d22a600cb5918d8ea600269311a0ed5d84409015d1ac460cef46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heyrovsky, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:40:04 compute-0 podman[240030]: 2025-11-24 18:40:04.395455769 +0000 UTC m=+0.717809518 container died 13bd945d47e4d22a600cb5918d8ea600269311a0ed5d84409015d1ac460cef46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heyrovsky, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:40:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b189fd9bb3ee01b4e05bdf49ccfd62158243bc013ef9abc85b6cc4aa9b21f306-merged.mount: Deactivated successfully.
Nov 24 18:40:04 compute-0 podman[240030]: 2025-11-24 18:40:04.541013664 +0000 UTC m=+0.863367413 container remove 13bd945d47e4d22a600cb5918d8ea600269311a0ed5d84409015d1ac460cef46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 18:40:04 compute-0 systemd[1]: libpod-conmon-13bd945d47e4d22a600cb5918d8ea600269311a0ed5d84409015d1ac460cef46.scope: Deactivated successfully.
Nov 24 18:40:04 compute-0 podman[240070]: 2025-11-24 18:40:04.694290671 +0000 UTC m=+0.042996106 container create 14398979fa2f69be7c5c35715124d9286b1564bdaaf6ed2f0f8b0ca827940bd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:40:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:40:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:40:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:40:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:40:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:40:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:40:04 compute-0 systemd[1]: Started libpod-conmon-14398979fa2f69be7c5c35715124d9286b1564bdaaf6ed2f0f8b0ca827940bd1.scope.
Nov 24 18:40:04 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:40:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0209438eb7bda5bdc2b24957d2ca9d53381aeaa01063e22f5570b33db6575d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:40:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0209438eb7bda5bdc2b24957d2ca9d53381aeaa01063e22f5570b33db6575d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:40:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0209438eb7bda5bdc2b24957d2ca9d53381aeaa01063e22f5570b33db6575d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:40:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0209438eb7bda5bdc2b24957d2ca9d53381aeaa01063e22f5570b33db6575d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:40:04 compute-0 podman[240070]: 2025-11-24 18:40:04.673859065 +0000 UTC m=+0.022564530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:40:04 compute-0 podman[240070]: 2025-11-24 18:40:04.778422334 +0000 UTC m=+0.127127789 container init 14398979fa2f69be7c5c35715124d9286b1564bdaaf6ed2f0f8b0ca827940bd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_feistel, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:40:04 compute-0 podman[240070]: 2025-11-24 18:40:04.78550952 +0000 UTC m=+0.134214955 container start 14398979fa2f69be7c5c35715124d9286b1564bdaaf6ed2f0f8b0ca827940bd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:40:04 compute-0 podman[240070]: 2025-11-24 18:40:04.790046682 +0000 UTC m=+0.138752147 container attach 14398979fa2f69be7c5c35715124d9286b1564bdaaf6ed2f0f8b0ca827940bd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:40:05 compute-0 ceph-mon[74927]: pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:05 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]: {
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:     "0": [
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:         {
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "devices": [
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "/dev/loop3"
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             ],
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "lv_name": "ceph_lv0",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "lv_size": "21470642176",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "name": "ceph_lv0",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "tags": {
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.cluster_name": "ceph",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.crush_device_class": "",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.encrypted": "0",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.osd_id": "0",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.type": "block",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.vdo": "0"
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             },
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "type": "block",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "vg_name": "ceph_vg0"
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:         }
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:     ],
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:     "1": [
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:         {
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "devices": [
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "/dev/loop4"
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             ],
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "lv_name": "ceph_lv1",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "lv_size": "21470642176",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "name": "ceph_lv1",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "tags": {
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.cluster_name": "ceph",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.crush_device_class": "",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.encrypted": "0",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.osd_id": "1",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.type": "block",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.vdo": "0"
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             },
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "type": "block",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "vg_name": "ceph_vg1"
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:         }
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:     ],
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:     "2": [
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:         {
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "devices": [
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "/dev/loop5"
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             ],
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "lv_name": "ceph_lv2",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "lv_size": "21470642176",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "name": "ceph_lv2",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "tags": {
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.cluster_name": "ceph",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.crush_device_class": "",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.encrypted": "0",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.osd_id": "2",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.type": "block",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:                 "ceph.vdo": "0"
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             },
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "type": "block",
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:             "vg_name": "ceph_vg2"
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:         }
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]:     ]
Nov 24 18:40:05 compute-0 mystifying_feistel[240086]: }
Nov 24 18:40:05 compute-0 systemd[1]: libpod-14398979fa2f69be7c5c35715124d9286b1564bdaaf6ed2f0f8b0ca827940bd1.scope: Deactivated successfully.
Nov 24 18:40:05 compute-0 conmon[240086]: conmon 14398979fa2f69be7c5c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-14398979fa2f69be7c5c35715124d9286b1564bdaaf6ed2f0f8b0ca827940bd1.scope/container/memory.events
Nov 24 18:40:05 compute-0 podman[240070]: 2025-11-24 18:40:05.567197209 +0000 UTC m=+0.915902644 container died 14398979fa2f69be7c5c35715124d9286b1564bdaaf6ed2f0f8b0ca827940bd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 18:40:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca0209438eb7bda5bdc2b24957d2ca9d53381aeaa01063e22f5570b33db6575d-merged.mount: Deactivated successfully.
Nov 24 18:40:05 compute-0 podman[240070]: 2025-11-24 18:40:05.628845555 +0000 UTC m=+0.977550990 container remove 14398979fa2f69be7c5c35715124d9286b1564bdaaf6ed2f0f8b0ca827940bd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_feistel, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:40:05 compute-0 systemd[1]: libpod-conmon-14398979fa2f69be7c5c35715124d9286b1564bdaaf6ed2f0f8b0ca827940bd1.scope: Deactivated successfully.
Nov 24 18:40:05 compute-0 sudo[239965]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:05 compute-0 sudo[240106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:40:05 compute-0 sudo[240106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:40:05 compute-0 sudo[240106]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:05 compute-0 sudo[240131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:40:05 compute-0 sudo[240131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:40:05 compute-0 sudo[240131]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:05 compute-0 sudo[240156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:40:05 compute-0 sudo[240156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:40:05 compute-0 sudo[240156]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:05 compute-0 sudo[240181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:40:05 compute-0 sudo[240181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:40:06 compute-0 podman[240246]: 2025-11-24 18:40:06.261985176 +0000 UTC m=+0.055240149 container create 465d93d32715f3d949ebb89dbd23170baf5936611f6c48c89af14b4ecfadd256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hopper, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:40:06 compute-0 systemd[1]: Started libpod-conmon-465d93d32715f3d949ebb89dbd23170baf5936611f6c48c89af14b4ecfadd256.scope.
Nov 24 18:40:06 compute-0 podman[240246]: 2025-11-24 18:40:06.234772292 +0000 UTC m=+0.028027355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:40:06 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:40:06 compute-0 podman[240246]: 2025-11-24 18:40:06.35742929 +0000 UTC m=+0.150684283 container init 465d93d32715f3d949ebb89dbd23170baf5936611f6c48c89af14b4ecfadd256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hopper, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 18:40:06 compute-0 podman[240246]: 2025-11-24 18:40:06.364527866 +0000 UTC m=+0.157782839 container start 465d93d32715f3d949ebb89dbd23170baf5936611f6c48c89af14b4ecfadd256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hopper, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:40:06 compute-0 podman[240246]: 2025-11-24 18:40:06.367680404 +0000 UTC m=+0.160935407 container attach 465d93d32715f3d949ebb89dbd23170baf5936611f6c48c89af14b4ecfadd256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 18:40:06 compute-0 sleepy_hopper[240263]: 167 167
Nov 24 18:40:06 compute-0 systemd[1]: libpod-465d93d32715f3d949ebb89dbd23170baf5936611f6c48c89af14b4ecfadd256.scope: Deactivated successfully.
Nov 24 18:40:06 compute-0 podman[240246]: 2025-11-24 18:40:06.371506279 +0000 UTC m=+0.164761272 container died 465d93d32715f3d949ebb89dbd23170baf5936611f6c48c89af14b4ecfadd256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:40:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fc3b6702da7f8cdb558a07fab46ec086e60cc4625d538f6fea3e850824d4216-merged.mount: Deactivated successfully.
Nov 24 18:40:06 compute-0 podman[240246]: 2025-11-24 18:40:06.409258284 +0000 UTC m=+0.202513257 container remove 465d93d32715f3d949ebb89dbd23170baf5936611f6c48c89af14b4ecfadd256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:40:06 compute-0 systemd[1]: libpod-conmon-465d93d32715f3d949ebb89dbd23170baf5936611f6c48c89af14b4ecfadd256.scope: Deactivated successfully.
Nov 24 18:40:06 compute-0 podman[240288]: 2025-11-24 18:40:06.578536596 +0000 UTC m=+0.043964140 container create 244380a20176ca94e3da1d51b504a2f30c89326d76f865e40563345cb8178fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rhodes, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 18:40:06 compute-0 systemd[1]: Started libpod-conmon-244380a20176ca94e3da1d51b504a2f30c89326d76f865e40563345cb8178fda.scope.
Nov 24 18:40:06 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:40:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64944f7e89ddc3ac30e497c04f0206d45ab8314b07757b7c2f3ae91821f413ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:40:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64944f7e89ddc3ac30e497c04f0206d45ab8314b07757b7c2f3ae91821f413ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:40:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64944f7e89ddc3ac30e497c04f0206d45ab8314b07757b7c2f3ae91821f413ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:40:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64944f7e89ddc3ac30e497c04f0206d45ab8314b07757b7c2f3ae91821f413ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:40:06 compute-0 podman[240288]: 2025-11-24 18:40:06.651686478 +0000 UTC m=+0.117114042 container init 244380a20176ca94e3da1d51b504a2f30c89326d76f865e40563345cb8178fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:40:06 compute-0 podman[240288]: 2025-11-24 18:40:06.561256568 +0000 UTC m=+0.026684132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:40:06 compute-0 podman[240288]: 2025-11-24 18:40:06.663394358 +0000 UTC m=+0.128821902 container start 244380a20176ca94e3da1d51b504a2f30c89326d76f865e40563345cb8178fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:40:06 compute-0 podman[240288]: 2025-11-24 18:40:06.666661239 +0000 UTC m=+0.132088833 container attach 244380a20176ca94e3da1d51b504a2f30c89326d76f865e40563345cb8178fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 18:40:07 compute-0 ceph-mon[74927]: pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:07 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:07 compute-0 angry_rhodes[240305]: {
Nov 24 18:40:07 compute-0 angry_rhodes[240305]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:40:07 compute-0 angry_rhodes[240305]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:40:07 compute-0 angry_rhodes[240305]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:40:07 compute-0 angry_rhodes[240305]:         "osd_id": 0,
Nov 24 18:40:07 compute-0 angry_rhodes[240305]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:40:07 compute-0 angry_rhodes[240305]:         "type": "bluestore"
Nov 24 18:40:07 compute-0 angry_rhodes[240305]:     },
Nov 24 18:40:07 compute-0 angry_rhodes[240305]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:40:07 compute-0 angry_rhodes[240305]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:40:07 compute-0 angry_rhodes[240305]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:40:07 compute-0 angry_rhodes[240305]:         "osd_id": 1,
Nov 24 18:40:07 compute-0 angry_rhodes[240305]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:40:07 compute-0 angry_rhodes[240305]:         "type": "bluestore"
Nov 24 18:40:07 compute-0 angry_rhodes[240305]:     },
Nov 24 18:40:07 compute-0 angry_rhodes[240305]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:40:07 compute-0 angry_rhodes[240305]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:40:07 compute-0 angry_rhodes[240305]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:40:07 compute-0 angry_rhodes[240305]:         "osd_id": 2,
Nov 24 18:40:07 compute-0 angry_rhodes[240305]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:40:07 compute-0 angry_rhodes[240305]:         "type": "bluestore"
Nov 24 18:40:07 compute-0 angry_rhodes[240305]:     }
Nov 24 18:40:07 compute-0 angry_rhodes[240305]: }
Nov 24 18:40:07 compute-0 systemd[1]: libpod-244380a20176ca94e3da1d51b504a2f30c89326d76f865e40563345cb8178fda.scope: Deactivated successfully.
Nov 24 18:40:07 compute-0 systemd[1]: libpod-244380a20176ca94e3da1d51b504a2f30c89326d76f865e40563345cb8178fda.scope: Consumed 1.022s CPU time.
Nov 24 18:40:07 compute-0 podman[240288]: 2025-11-24 18:40:07.682025086 +0000 UTC m=+1.147452670 container died 244380a20176ca94e3da1d51b504a2f30c89326d76f865e40563345cb8178fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:40:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-64944f7e89ddc3ac30e497c04f0206d45ab8314b07757b7c2f3ae91821f413ac-merged.mount: Deactivated successfully.
Nov 24 18:40:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:40:07 compute-0 podman[240288]: 2025-11-24 18:40:07.745337444 +0000 UTC m=+1.210764998 container remove 244380a20176ca94e3da1d51b504a2f30c89326d76f865e40563345cb8178fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rhodes, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 18:40:07 compute-0 systemd[1]: libpod-conmon-244380a20176ca94e3da1d51b504a2f30c89326d76f865e40563345cb8178fda.scope: Deactivated successfully.
Nov 24 18:40:07 compute-0 sudo[240181]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:40:07 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:40:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:40:07 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:40:07 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev f2a13081-88e1-4ce6-823f-baba801d5119 does not exist
Nov 24 18:40:07 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev c6fe55c8-9ddd-4685-93ac-f0de07ed5357 does not exist
Nov 24 18:40:07 compute-0 sudo[240352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:40:07 compute-0 sudo[240352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:40:07 compute-0 sudo[240352]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:07 compute-0 sudo[240377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:40:07 compute-0 sudo[240377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:40:07 compute-0 sudo[240377]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:08 compute-0 sshd-session[240402]: Accepted publickey for zuul from 192.168.122.30 port 48888 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:40:08 compute-0 systemd-logind[822]: New session 52 of user zuul.
Nov 24 18:40:08 compute-0 systemd[1]: Started Session 52 of User zuul.
Nov 24 18:40:08 compute-0 sshd-session[240402]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:40:08 compute-0 ceph-mon[74927]: pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:08 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:40:08 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:40:09 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:09 compute-0 python3.9[240555]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:40:10 compute-0 python3.9[240709]: ansible-ansible.builtin.service_facts Invoked
Nov 24 18:40:10 compute-0 network[240726]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 18:40:10 compute-0 network[240727]: 'network-scripts' will be removed from distribution in near future.
Nov 24 18:40:10 compute-0 network[240728]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 18:40:11 compute-0 ceph-mon[74927]: pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:11 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:40:13 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:13 compute-0 ceph-mon[74927]: pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:14 compute-0 ceph-mon[74927]: pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:15 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:15 compute-0 sudo[240998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymfpggbnwijgoslfyjfkgggswvglmrav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009615.4058044-47-102971013894050/AnsiballZ_setup.py'
Nov 24 18:40:15 compute-0 sudo[240998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:15 compute-0 python3.9[241000]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 18:40:16 compute-0 sudo[240998]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:16 compute-0 sudo[241082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbkacbpiusugtspdskxokezwsytczxdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009615.4058044-47-102971013894050/AnsiballZ_dnf.py'
Nov 24 18:40:16 compute-0 sudo[241082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:16 compute-0 python3.9[241084]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:40:17 compute-0 ceph-mon[74927]: pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:17 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:40:19 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:19 compute-0 ceph-mon[74927]: pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:21 compute-0 ceph-mon[74927]: pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:21 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:22 compute-0 podman[241086]: 2025-11-24 18:40:22.054834169 +0000 UTC m=+0.140203434 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 24 18:40:22 compute-0 ceph-mon[74927]: pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:40:22.732 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:40:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:40:22.732 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:40:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:40:22.732 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:40:22 compute-0 sudo[241082]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:40:23 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:23 compute-0 sudo[241261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aocpupfwmwrsmyexbzxrrqoaouyypukq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009622.9612978-59-185508072040286/AnsiballZ_stat.py'
Nov 24 18:40:23 compute-0 sudo[241261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:23 compute-0 python3.9[241263]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:40:23 compute-0 sudo[241261]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:24 compute-0 sudo[241413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpejjhctwlbzxygenzhbkcmjlzbkchnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009623.8774374-69-248932022227373/AnsiballZ_command.py'
Nov 24 18:40:24 compute-0 sudo[241413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:24 compute-0 podman[241415]: 2025-11-24 18:40:24.516737791 +0000 UTC m=+0.110803176 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 24 18:40:24 compute-0 python3.9[241416]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:40:24 compute-0 sudo[241413]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:24 compute-0 ceph-mon[74927]: pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:25 compute-0 sudo[241585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onhiqufvweomubddqwatwtkrurqtufdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009624.8455656-79-94329706648692/AnsiballZ_stat.py'
Nov 24 18:40:25 compute-0 sudo[241585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:25 compute-0 python3.9[241587]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:40:25 compute-0 sudo[241585]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:25 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:25 compute-0 sudo[241737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hszwycaxndjsrnowzczcqcchoosjfsyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009625.5278168-87-253576295923212/AnsiballZ_command.py'
Nov 24 18:40:25 compute-0 sudo[241737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:26 compute-0 python3.9[241739]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:40:26 compute-0 sudo[241737]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:26 compute-0 sudo[241890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-negaglkgbbabxraugztxsvoezvzqzbjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009626.2239416-95-57991512509741/AnsiballZ_stat.py'
Nov 24 18:40:26 compute-0 sudo[241890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:26 compute-0 python3.9[241892]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:40:26 compute-0 sudo[241890]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:27 compute-0 ceph-mon[74927]: pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:27 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:27 compute-0 sudo[242013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfyohqzofugfuiyesxwioacmlanvapfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009626.2239416-95-57991512509741/AnsiballZ_copy.py'
Nov 24 18:40:27 compute-0 sudo[242013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:27 compute-0 python3.9[242015]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764009626.2239416-95-57991512509741/.source.iscsi _original_basename=.8itx7q88 follow=False checksum=dd0d0d208ed07e6a1ac6c580acc057f9dc4e2fc0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:40:27 compute-0 sudo[242013]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:40:28 compute-0 sudo[242165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeylksxdlbhatknibuqiuainlzmkduqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009627.838646-110-44812529221364/AnsiballZ_file.py'
Nov 24 18:40:28 compute-0 sudo[242165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:28 compute-0 python3.9[242167]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:40:28 compute-0 sudo[242165]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:29 compute-0 sudo[242317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvjhcazqpmhlfdfcswzpxystjnqizssu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009628.8154225-118-185523261752371/AnsiballZ_lineinfile.py'
Nov 24 18:40:29 compute-0 sudo[242317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:29 compute-0 ceph-mon[74927]: pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:29 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:29 compute-0 python3.9[242319]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:40:29 compute-0 sudo[242317]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:30 compute-0 ceph-mon[74927]: pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:30 compute-0 sudo[242469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yacinyrfvtrsornnielctmicrmwrdyru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009630.0488167-127-46180221283331/AnsiballZ_systemd_service.py'
Nov 24 18:40:30 compute-0 sudo[242469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:31 compute-0 python3.9[242471]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:40:31 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 24 18:40:31 compute-0 sudo[242469]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:31 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:31 compute-0 sudo[242625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqnipdobwbiaozlxohguxxafhmccqubb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009631.3324606-135-16028747362242/AnsiballZ_systemd_service.py'
Nov 24 18:40:31 compute-0 sudo[242625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:31 compute-0 python3.9[242627]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:40:32 compute-0 systemd[1]: Reloading.
Nov 24 18:40:32 compute-0 systemd-rc-local-generator[242657]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:40:32 compute-0 systemd-sysv-generator[242660]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:40:32 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 24 18:40:32 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 24 18:40:32 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Nov 24 18:40:32 compute-0 systemd[1]: Started Open-iSCSI.
Nov 24 18:40:32 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 24 18:40:32 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 24 18:40:32 compute-0 sudo[242625]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:32 compute-0 ceph-mon[74927]: pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:40:33 compute-0 sudo[242827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgzqrrmbsdicsaaresnptgkynhfwkowd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009632.8308263-146-252184367347264/AnsiballZ_service_facts.py'
Nov 24 18:40:33 compute-0 sudo[242827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:33 compute-0 python3.9[242829]: ansible-ansible.builtin.service_facts Invoked
Nov 24 18:40:33 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:33 compute-0 network[242846]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 18:40:33 compute-0 network[242847]: 'network-scripts' will be removed from distribution in near future.
Nov 24 18:40:33 compute-0 network[242848]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 18:40:33 compute-0 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:40:33 compute-0 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 5582 writes, 23K keys, 5582 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5582 writes, 857 syncs, 6.51 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 18:40:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:40:34
Nov 24 18:40:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:40:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:40:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['vms', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', '.rgw.root', 'default.rgw.log', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes']
Nov 24 18:40:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:40:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:40:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:40:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:40:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:40:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:40:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:40:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:40:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:40:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:40:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:40:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:40:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:40:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:40:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:40:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:40:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:40:35 compute-0 ceph-mon[74927]: pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:35 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:37 compute-0 ceph-mon[74927]: pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:37 compute-0 sudo[242827]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:37 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:40:37 compute-0 sudo[243118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvtqmhouzqpjbobstkroxmuxjmwgfqqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009637.68817-156-18915141035426/AnsiballZ_file.py'
Nov 24 18:40:37 compute-0 sudo[243118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:38 compute-0 python3.9[243120]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 24 18:40:38 compute-0 sudo[243118]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:38 compute-0 sudo[243270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yndtpkdbkjgtcyxboluqtrzfsgopufmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009638.2986045-164-157323347396337/AnsiballZ_modprobe.py'
Nov 24 18:40:38 compute-0 sudo[243270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:38 compute-0 python3.9[243272]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 24 18:40:38 compute-0 sudo[243270]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:39 compute-0 ceph-mon[74927]: pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:39 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:39 compute-0 sudo[243426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igpmtqvaogdvplyceaxzfsytwqklpizb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009639.1811097-172-248901092692047/AnsiballZ_stat.py'
Nov 24 18:40:39 compute-0 sudo[243426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:39 compute-0 python3.9[243428]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:40:39 compute-0 sudo[243426]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:39 compute-0 sudo[243549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktkqwjquudvskxuwysqoqsfbvredrgpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009639.1811097-172-248901092692047/AnsiballZ_copy.py'
Nov 24 18:40:39 compute-0 sudo[243549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:40 compute-0 python3.9[243551]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764009639.1811097-172-248901092692047/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:40:40 compute-0 sudo[243549]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:40 compute-0 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:40:40 compute-0 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Cumulative writes: 6685 writes, 27K keys, 6685 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6685 writes, 1209 syncs, 5.53 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 271 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.055       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.055       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.055       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.095       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.095       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.095       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 18:40:40 compute-0 sudo[243701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogjyvpbmzcuzahhpnptemoltsosqwwhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009640.433679-188-119167116738466/AnsiballZ_lineinfile.py'
Nov 24 18:40:40 compute-0 sudo[243701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:40 compute-0 python3.9[243703]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:40:40 compute-0 sudo[243701]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:41 compute-0 ceph-mon[74927]: pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:41 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:41 compute-0 sudo[243853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvjearibpntfahjjqhmxiuacxotdrmlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009641.0617814-196-205952695147648/AnsiballZ_systemd.py'
Nov 24 18:40:41 compute-0 sudo[243853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:41 compute-0 python3.9[243855]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 18:40:42 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 24 18:40:42 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 24 18:40:42 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 24 18:40:42 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 24 18:40:42 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 24 18:40:42 compute-0 sudo[243853]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:42 compute-0 sudo[244010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vycnqvmvtmiuiaxervwlgvqtmtnocurt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009642.297773-204-164549804238235/AnsiballZ_file.py'
Nov 24 18:40:42 compute-0 sudo[244010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:40:42 compute-0 python3.9[244012]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:40:42 compute-0 sudo[244010]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:43 compute-0 ceph-mon[74927]: pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:40:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:40:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:40:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:40:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:40:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:40:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:40:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:40:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:40:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:40:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:40:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:40:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:40:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:40:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:40:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:40:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:40:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:40:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:40:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:40:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:40:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:40:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:40:43 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:43 compute-0 sudo[244162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcskrlppdebxbgdtyphzbxfmipxzymfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009643.2266681-213-241374365567296/AnsiballZ_stat.py'
Nov 24 18:40:43 compute-0 sudo[244162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:43 compute-0 python3.9[244164]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:40:43 compute-0 sudo[244162]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:44 compute-0 sudo[244314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abfisvumdwoquivknvgydvmglebpvjgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009643.9752576-222-215421931860103/AnsiballZ_stat.py'
Nov 24 18:40:44 compute-0 sudo[244314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:44 compute-0 python3.9[244316]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:40:44 compute-0 sudo[244314]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:45 compute-0 sudo[244466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sklbuywxxyomkfejnpxqqakpthnybhab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009644.7401109-230-95481122729426/AnsiballZ_stat.py'
Nov 24 18:40:45 compute-0 sudo[244466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:45 compute-0 ceph-mon[74927]: pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:45 compute-0 python3.9[244468]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:40:45 compute-0 sudo[244466]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:45 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:45 compute-0 sudo[244589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agywglowissseznhlqrvuurnalqcakep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009644.7401109-230-95481122729426/AnsiballZ_copy.py'
Nov 24 18:40:45 compute-0 sudo[244589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:45 compute-0 python3.9[244591]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764009644.7401109-230-95481122729426/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:40:45 compute-0 sudo[244589]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:46 compute-0 sudo[244741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrfysthcinisosmfhmkuxryngocxamof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009646.0742352-245-202084993983979/AnsiballZ_command.py'
Nov 24 18:40:46 compute-0 sudo[244741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:46 compute-0 python3.9[244743]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:40:46 compute-0 sudo[244741]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:47 compute-0 ceph-mon[74927]: pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:47 compute-0 sudo[244894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biknaodkrsmdzpbjaqbboapueauxoxhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009646.9602585-253-217101484144672/AnsiballZ_lineinfile.py'
Nov 24 18:40:47 compute-0 sudo[244894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:47 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:47 compute-0 python3.9[244896]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:40:47 compute-0 sudo[244894]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:40:47 compute-0 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:40:47 compute-0 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 5662 writes, 23K keys, 5662 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5662 writes, 859 syncs, 6.59 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.027       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 18:40:48 compute-0 sudo[245046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azriznuiijelekxhhwadxmefdkzyheop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009647.7717383-261-195849151012558/AnsiballZ_replace.py'
Nov 24 18:40:48 compute-0 sudo[245046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:48 compute-0 python3.9[245048]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:40:48 compute-0 sudo[245046]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:48 compute-0 sudo[245198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmarsvuhsfzbpqvaurmnymknmgtvsnny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009648.6995497-269-91757911738244/AnsiballZ_replace.py'
Nov 24 18:40:48 compute-0 sudo[245198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:49 compute-0 ceph-mon[74927]: pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:49 compute-0 python3.9[245200]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:40:49 compute-0 sudo[245198]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:49 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:49 compute-0 sudo[245350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqefpbeihozgjvnakvftprzjnmdtbqvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009649.4630864-278-197317270613487/AnsiballZ_lineinfile.py'
Nov 24 18:40:49 compute-0 sudo[245350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:49 compute-0 python3.9[245352]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:40:50 compute-0 sudo[245350]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:50 compute-0 sudo[245502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkijsvgwpxuacovzacsdgmjalnzktmdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009650.1804502-278-233922596480531/AnsiballZ_lineinfile.py'
Nov 24 18:40:50 compute-0 sudo[245502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:50 compute-0 ceph-mgr[75218]: [devicehealth INFO root] Check health
Nov 24 18:40:50 compute-0 python3.9[245504]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:40:50 compute-0 sudo[245502]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:51 compute-0 ceph-mon[74927]: pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:51 compute-0 sudo[245654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hocncvxqyuaxyklnnjztccysdzaqqelq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009650.9303465-278-96703292424950/AnsiballZ_lineinfile.py'
Nov 24 18:40:51 compute-0 sudo[245654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:51 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:51 compute-0 python3.9[245656]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:40:51 compute-0 sudo[245654]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:51 compute-0 sudo[245806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhzkiasydxleclfkblkifqoihdnekicf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009651.6369362-278-208023162736109/AnsiballZ_lineinfile.py'
Nov 24 18:40:51 compute-0 sudo[245806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:52 compute-0 python3.9[245808]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:40:52 compute-0 sudo[245806]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:40:52 compute-0 sudo[245968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcxsdagguyuxvehqwebqsrqswmiijfjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009652.4489515-307-27758332607643/AnsiballZ_stat.py'
Nov 24 18:40:52 compute-0 sudo[245968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:52 compute-0 podman[245932]: 2025-11-24 18:40:52.900721118 +0000 UTC m=+0.140406838 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 18:40:53 compute-0 python3.9[245970]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:40:53 compute-0 sudo[245968]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:53 compute-0 ceph-mon[74927]: pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:53 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:53 compute-0 sudo[246135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-togoymjwjmpetftotcevjinkmpdjhdmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009653.2962165-315-26102200933104/AnsiballZ_file.py'
Nov 24 18:40:53 compute-0 sudo[246135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:53 compute-0 python3.9[246137]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:40:53 compute-0 sudo[246135]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:54 compute-0 sudo[246287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smoglzcyzanvvwtwjhahktcyotwzvjmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009654.0866637-324-77030286749737/AnsiballZ_file.py'
Nov 24 18:40:54 compute-0 sudo[246287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:54 compute-0 python3.9[246289]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:40:54 compute-0 sudo[246287]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:54 compute-0 podman[246366]: 2025-11-24 18:40:54.959708261 +0000 UTC m=+0.048690955 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 18:40:55 compute-0 sudo[246455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kemhzfcukloptrmwpiahkmmpvoyttgiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009654.799098-332-92158581589644/AnsiballZ_stat.py'
Nov 24 18:40:55 compute-0 sudo[246455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:55 compute-0 ceph-mon[74927]: pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:55 compute-0 python3.9[246457]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:40:55 compute-0 sudo[246455]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:55 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:55 compute-0 sudo[246533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrtcduthtvpecqayckyqdxrzolpjbvii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009654.799098-332-92158581589644/AnsiballZ_file.py'
Nov 24 18:40:55 compute-0 sudo[246533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:55 compute-0 python3.9[246535]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:40:55 compute-0 sudo[246533]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:56 compute-0 sudo[246685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsjcyjfdjbcplmqzckljdneksewesgnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009655.9390512-332-260544292218788/AnsiballZ_stat.py'
Nov 24 18:40:56 compute-0 sudo[246685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:56 compute-0 python3.9[246687]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:40:56 compute-0 sudo[246685]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:56 compute-0 sudo[246763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykvdigwatkjkpkqiotdjyhgqkziodgxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009655.9390512-332-260544292218788/AnsiballZ_file.py'
Nov 24 18:40:56 compute-0 sudo[246763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:56 compute-0 python3.9[246765]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:40:56 compute-0 sudo[246763]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:57 compute-0 ceph-mon[74927]: pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:57 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:57 compute-0 sudo[246915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdqplpbkekovhwapdavirwyafzwsuxwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009657.0749514-355-116890087954916/AnsiballZ_file.py'
Nov 24 18:40:57 compute-0 sudo[246915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:57 compute-0 python3.9[246917]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:40:57 compute-0 sudo[246915]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:40:58 compute-0 sudo[247067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtebznkodjlelirjatzyovfzmtrrpgvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009657.9383643-363-202579226581128/AnsiballZ_stat.py'
Nov 24 18:40:58 compute-0 sudo[247067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:58 compute-0 python3.9[247069]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:40:58 compute-0 sudo[247067]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:58 compute-0 sudo[247145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-natecogihvpvyssrwzggqjczlrisadrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009657.9383643-363-202579226581128/AnsiballZ_file.py'
Nov 24 18:40:58 compute-0 sudo[247145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:58 compute-0 python3.9[247147]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:40:58 compute-0 sudo[247145]: pam_unix(sudo:session): session closed for user root
Nov 24 18:40:59 compute-0 ceph-mon[74927]: pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:59 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:40:59 compute-0 sudo[247297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bucaovdionexgyzmzrbdpfwoqiwphlzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009659.3760648-375-139037405457811/AnsiballZ_stat.py'
Nov 24 18:40:59 compute-0 sudo[247297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:40:59 compute-0 python3.9[247299]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:40:59 compute-0 sudo[247297]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:00 compute-0 sudo[247375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yygekrtgpozbcgwciqvhipakrpyrgbsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009659.3760648-375-139037405457811/AnsiballZ_file.py'
Nov 24 18:41:00 compute-0 sudo[247375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:00 compute-0 python3.9[247377]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:41:00 compute-0 sudo[247375]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:00 compute-0 sudo[247527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akxeqzlrqhzaskwtqukqdjbbgtzfxhtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009660.6216795-387-120520634876984/AnsiballZ_systemd.py'
Nov 24 18:41:00 compute-0 sudo[247527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:01 compute-0 python3.9[247529]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:41:01 compute-0 systemd[1]: Reloading.
Nov 24 18:41:01 compute-0 ceph-mon[74927]: pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:01 compute-0 systemd-rc-local-generator[247557]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:41:01 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:01 compute-0 systemd-sysv-generator[247560]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:41:01 compute-0 sudo[247527]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:02 compute-0 sudo[247716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfqydxkjpvvxypxeyagcvhbhcfsahmrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009661.9684613-395-266835522715966/AnsiballZ_stat.py'
Nov 24 18:41:02 compute-0 sudo[247716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:02 compute-0 python3.9[247718]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:41:02 compute-0 sudo[247716]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:41:02 compute-0 sudo[247794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfdwsxnoweuwqirakuphfsjrompnjomn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009661.9684613-395-266835522715966/AnsiballZ_file.py'
Nov 24 18:41:02 compute-0 sudo[247794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:03 compute-0 python3.9[247796]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:41:03 compute-0 sudo[247794]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:03 compute-0 ceph-mon[74927]: pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:03 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:03 compute-0 sudo[247946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tajpitywqgmdrogmyjytprbtfftxxlxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009663.3626406-407-266671034360518/AnsiballZ_stat.py'
Nov 24 18:41:03 compute-0 sudo[247946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:03 compute-0 python3.9[247948]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:41:03 compute-0 sudo[247946]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:04 compute-0 sudo[248024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwatflzobgqtjthlzeurfsipxbbbxoui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009663.3626406-407-266671034360518/AnsiballZ_file.py'
Nov 24 18:41:04 compute-0 sudo[248024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:04 compute-0 python3.9[248026]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:41:04 compute-0 sudo[248024]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:41:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:41:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:41:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:41:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:41:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:41:04 compute-0 sudo[248176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqvvmrlgeynywunzjgrydhlfsevhvczg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009664.4702165-419-88833252844311/AnsiballZ_systemd.py'
Nov 24 18:41:04 compute-0 sudo[248176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:05 compute-0 python3.9[248178]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:41:05 compute-0 systemd[1]: Reloading.
Nov 24 18:41:05 compute-0 systemd-rc-local-generator[248203]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:41:05 compute-0 systemd-sysv-generator[248206]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:41:05 compute-0 ceph-mon[74927]: pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:05 compute-0 systemd[1]: Starting Create netns directory...
Nov 24 18:41:05 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 18:41:05 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 18:41:05 compute-0 systemd[1]: Finished Create netns directory.
Nov 24 18:41:05 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:05 compute-0 sudo[248176]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:05 compute-0 sudo[248369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxahzrgzmnmzjdtzafcnwyakkxyefiyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009665.7749867-429-31393250877118/AnsiballZ_file.py'
Nov 24 18:41:06 compute-0 sudo[248369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:06 compute-0 python3.9[248371]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:41:06 compute-0 sudo[248369]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:06 compute-0 sudo[248521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqofmaiaxpbjbqkbxwgoiaehiejdtdls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009666.3559847-437-234106034483248/AnsiballZ_stat.py'
Nov 24 18:41:06 compute-0 sudo[248521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:06 compute-0 python3.9[248523]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:41:06 compute-0 sudo[248521]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:07 compute-0 sudo[248644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rircxznhnpdfvdoxqxwfqjnbcyfhrhtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009666.3559847-437-234106034483248/AnsiballZ_copy.py'
Nov 24 18:41:07 compute-0 sudo[248644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:07 compute-0 ceph-mon[74927]: pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:07 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:07 compute-0 python3.9[248646]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009666.3559847-437-234106034483248/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:41:07 compute-0 sudo[248644]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:41:08 compute-0 sudo[248744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:41:08 compute-0 sudo[248744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:41:08 compute-0 sudo[248744]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:08 compute-0 sudo[248785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:41:08 compute-0 sudo[248785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:41:08 compute-0 sudo[248785]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:08 compute-0 sudo[248866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdfxzgaagptvwqustqvdwxcvktqdbytt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009667.8581443-454-270468155167886/AnsiballZ_file.py'
Nov 24 18:41:08 compute-0 sudo[248866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:08 compute-0 sudo[248828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:41:08 compute-0 sudo[248828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:41:08 compute-0 sudo[248828]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:08 compute-0 sudo[248874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:41:08 compute-0 sudo[248874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:41:08 compute-0 python3.9[248871]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:41:08 compute-0 sudo[248866]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:08 compute-0 ceph-mon[74927]: pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.425206) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009668425284, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1573, "num_deletes": 251, "total_data_size": 2606937, "memory_usage": 2647472, "flush_reason": "Manual Compaction"}
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009668442736, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2561561, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14760, "largest_seqno": 16332, "table_properties": {"data_size": 2554181, "index_size": 4387, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14574, "raw_average_key_size": 19, "raw_value_size": 2539603, "raw_average_value_size": 3418, "num_data_blocks": 201, "num_entries": 743, "num_filter_entries": 743, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764009493, "oldest_key_time": 1764009493, "file_creation_time": 1764009668, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 17565 microseconds, and 10523 cpu microseconds.
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.442787) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2561561 bytes OK
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.442808) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.444118) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.444135) EVENT_LOG_v1 {"time_micros": 1764009668444129, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.444153) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2600139, prev total WAL file size 2600139, number of live WAL files 2.
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.445134) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2501KB)], [35(6733KB)]
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009668445178, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9456532, "oldest_snapshot_seqno": -1}
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4000 keys, 7693577 bytes, temperature: kUnknown
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009668485548, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7693577, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7664683, "index_size": 17776, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 97757, "raw_average_key_size": 24, "raw_value_size": 7590113, "raw_average_value_size": 1897, "num_data_blocks": 753, "num_entries": 4000, "num_filter_entries": 4000, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008324, "oldest_key_time": 0, "file_creation_time": 1764009668, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.485761) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7693577 bytes
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.487588) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 233.9 rd, 190.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 6.6 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(6.7) write-amplify(3.0) OK, records in: 4514, records dropped: 514 output_compression: NoCompression
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.487603) EVENT_LOG_v1 {"time_micros": 1764009668487595, "job": 16, "event": "compaction_finished", "compaction_time_micros": 40436, "compaction_time_cpu_micros": 15433, "output_level": 6, "num_output_files": 1, "total_output_size": 7693577, "num_input_records": 4514, "num_output_records": 4000, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009668488093, "job": 16, "event": "table_file_deletion", "file_number": 37}
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009668489090, "job": 16, "event": "table_file_deletion", "file_number": 35}
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.445049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.489164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.489169) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.489171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.489173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:41:08 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.489175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:41:08 compute-0 sudo[248874]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 24 18:41:08 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 18:41:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:41:08 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:41:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:41:08 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:41:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:41:08 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:41:08 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev b42278e5-2095-4e17-812a-4b5d103f408d does not exist
Nov 24 18:41:08 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 04508045-5407-46f3-b7b1-9e2ae4e1570e does not exist
Nov 24 18:41:08 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 6054c3f0-b48c-46ce-928e-40153215bb2c does not exist
Nov 24 18:41:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:41:08 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:41:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:41:08 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:41:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:41:08 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:41:08 compute-0 sudo[249028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:41:08 compute-0 sudo[249028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:41:08 compute-0 sudo[249028]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:08 compute-0 sudo[249077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:41:08 compute-0 sudo[249077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:41:08 compute-0 sudo[249077]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:08 compute-0 sudo[249127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yidibjawgufqhfxcnkhbqctnorxzablj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009668.5893915-462-241614243729571/AnsiballZ_stat.py'
Nov 24 18:41:08 compute-0 sudo[249127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:08 compute-0 sudo[249130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:41:08 compute-0 sudo[249130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:41:08 compute-0 sudo[249130]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:09 compute-0 sudo[249156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:41:09 compute-0 sudo[249156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:41:09 compute-0 python3.9[249134]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:41:09 compute-0 sudo[249127]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:09 compute-0 podman[249281]: 2025-11-24 18:41:09.353116823 +0000 UTC m=+0.045671112 container create c5e1551bafa4fb441e42fa0498b39e68d481ad0deae6cd8035178447f00bde67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_haslett, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:41:09 compute-0 systemd[1]: Started libpod-conmon-c5e1551bafa4fb441e42fa0498b39e68d481ad0deae6cd8035178447f00bde67.scope.
Nov 24 18:41:09 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:09 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:41:09 compute-0 podman[249281]: 2025-11-24 18:41:09.334379429 +0000 UTC m=+0.026933768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:41:09 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 18:41:09 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:41:09 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:41:09 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:41:09 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:41:09 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:41:09 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:41:09 compute-0 podman[249281]: 2025-11-24 18:41:09.44020663 +0000 UTC m=+0.132760969 container init c5e1551bafa4fb441e42fa0498b39e68d481ad0deae6cd8035178447f00bde67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:41:09 compute-0 podman[249281]: 2025-11-24 18:41:09.446578518 +0000 UTC m=+0.139132817 container start c5e1551bafa4fb441e42fa0498b39e68d481ad0deae6cd8035178447f00bde67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_haslett, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 18:41:09 compute-0 podman[249281]: 2025-11-24 18:41:09.449699555 +0000 UTC m=+0.142253854 container attach c5e1551bafa4fb441e42fa0498b39e68d481ad0deae6cd8035178447f00bde67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_haslett, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:41:09 compute-0 eloquent_haslett[249328]: 167 167
Nov 24 18:41:09 compute-0 systemd[1]: libpod-c5e1551bafa4fb441e42fa0498b39e68d481ad0deae6cd8035178447f00bde67.scope: Deactivated successfully.
Nov 24 18:41:09 compute-0 podman[249281]: 2025-11-24 18:41:09.453863708 +0000 UTC m=+0.146418057 container died c5e1551bafa4fb441e42fa0498b39e68d481ad0deae6cd8035178447f00bde67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_haslett, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 18:41:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d0ff6b40b8ebc2e3a9debbd5fa4672e1c7b7a14d818ebc82c4ff9696fa90faf-merged.mount: Deactivated successfully.
Nov 24 18:41:09 compute-0 podman[249281]: 2025-11-24 18:41:09.496595066 +0000 UTC m=+0.189149355 container remove c5e1551bafa4fb441e42fa0498b39e68d481ad0deae6cd8035178447f00bde67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 18:41:09 compute-0 sudo[249373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdahavpzdjcybgjhphauppqoewxaecuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009668.5893915-462-241614243729571/AnsiballZ_copy.py'
Nov 24 18:41:09 compute-0 sudo[249373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:09 compute-0 systemd[1]: libpod-conmon-c5e1551bafa4fb441e42fa0498b39e68d481ad0deae6cd8035178447f00bde67.scope: Deactivated successfully.
Nov 24 18:41:09 compute-0 podman[249383]: 2025-11-24 18:41:09.661427739 +0000 UTC m=+0.043645332 container create d9f5faadfbc74fabee6b0d1e1e5451760509be7ff2de3279d1d2a2cba1ee362c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bouman, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:41:09 compute-0 systemd[1]: Started libpod-conmon-d9f5faadfbc74fabee6b0d1e1e5451760509be7ff2de3279d1d2a2cba1ee362c.scope.
Nov 24 18:41:09 compute-0 python3.9[249377]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764009668.5893915-462-241614243729571/.source.json _original_basename=.cnpgpgjt follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:41:09 compute-0 sudo[249373]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:09 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:41:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87554e8238f355feb97101aa59f69cd5731889f9876a905b4da660d1f22b47d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:41:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87554e8238f355feb97101aa59f69cd5731889f9876a905b4da660d1f22b47d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:41:09 compute-0 podman[249383]: 2025-11-24 18:41:09.645080704 +0000 UTC m=+0.027298307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:41:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87554e8238f355feb97101aa59f69cd5731889f9876a905b4da660d1f22b47d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:41:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87554e8238f355feb97101aa59f69cd5731889f9876a905b4da660d1f22b47d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:41:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87554e8238f355feb97101aa59f69cd5731889f9876a905b4da660d1f22b47d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:41:09 compute-0 podman[249383]: 2025-11-24 18:41:09.754528504 +0000 UTC m=+0.136746117 container init d9f5faadfbc74fabee6b0d1e1e5451760509be7ff2de3279d1d2a2cba1ee362c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:41:09 compute-0 podman[249383]: 2025-11-24 18:41:09.764063791 +0000 UTC m=+0.146281384 container start d9f5faadfbc74fabee6b0d1e1e5451760509be7ff2de3279d1d2a2cba1ee362c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 18:41:09 compute-0 podman[249383]: 2025-11-24 18:41:09.76807326 +0000 UTC m=+0.150290853 container attach d9f5faadfbc74fabee6b0d1e1e5451760509be7ff2de3279d1d2a2cba1ee362c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 18:41:10 compute-0 sudo[249552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgqorsseniahmvcjkhdzvwxlnwsaqcqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009669.8958771-477-53424752298051/AnsiballZ_file.py'
Nov 24 18:41:10 compute-0 sudo[249552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:10 compute-0 python3.9[249554]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:41:10 compute-0 sudo[249552]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:10 compute-0 cool_bouman[249398]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:41:10 compute-0 cool_bouman[249398]: --> relative data size: 1.0
Nov 24 18:41:10 compute-0 cool_bouman[249398]: --> All data devices are unavailable
Nov 24 18:41:10 compute-0 ceph-mon[74927]: pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:10 compute-0 systemd[1]: libpod-d9f5faadfbc74fabee6b0d1e1e5451760509be7ff2de3279d1d2a2cba1ee362c.scope: Deactivated successfully.
Nov 24 18:41:10 compute-0 podman[249383]: 2025-11-24 18:41:10.745794795 +0000 UTC m=+1.128012408 container died d9f5faadfbc74fabee6b0d1e1e5451760509be7ff2de3279d1d2a2cba1ee362c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bouman, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 18:41:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-a87554e8238f355feb97101aa59f69cd5731889f9876a905b4da660d1f22b47d-merged.mount: Deactivated successfully.
Nov 24 18:41:10 compute-0 podman[249383]: 2025-11-24 18:41:10.796554412 +0000 UTC m=+1.178772005 container remove d9f5faadfbc74fabee6b0d1e1e5451760509be7ff2de3279d1d2a2cba1ee362c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bouman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:41:10 compute-0 systemd[1]: libpod-conmon-d9f5faadfbc74fabee6b0d1e1e5451760509be7ff2de3279d1d2a2cba1ee362c.scope: Deactivated successfully.
Nov 24 18:41:10 compute-0 sudo[249156]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:10 compute-0 sudo[249757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsjybolxkpjuhvxaklyshqfppnrmolxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009670.625078-485-232092214377691/AnsiballZ_stat.py'
Nov 24 18:41:10 compute-0 sudo[249757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:10 compute-0 sudo[249724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:41:10 compute-0 sudo[249724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:41:10 compute-0 sudo[249724]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:10 compute-0 sudo[249767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:41:10 compute-0 sudo[249767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:41:10 compute-0 sudo[249767]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:10 compute-0 sudo[249792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:41:10 compute-0 sudo[249792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:41:10 compute-0 sudo[249792]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:11 compute-0 sudo[249817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:41:11 compute-0 sudo[249817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:41:11 compute-0 sudo[249757]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:11 compute-0 podman[249952]: 2025-11-24 18:41:11.312171162 +0000 UTC m=+0.038600567 container create 7cb0722ca40b1f110773e2d4ef594ae7de488db75e7ee51c0d481dfdf45434cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Nov 24 18:41:11 compute-0 systemd[1]: Started libpod-conmon-7cb0722ca40b1f110773e2d4ef594ae7de488db75e7ee51c0d481dfdf45434cd.scope.
Nov 24 18:41:11 compute-0 sudo[250018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvxujqqokbkclowhxxnytqnewyipyyiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009670.625078-485-232092214377691/AnsiballZ_copy.py'
Nov 24 18:41:11 compute-0 sudo[250018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:11 compute-0 podman[249952]: 2025-11-24 18:41:11.295669363 +0000 UTC m=+0.022098838 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:41:11 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:41:11 compute-0 podman[249952]: 2025-11-24 18:41:11.4146561 +0000 UTC m=+0.141085515 container init 7cb0722ca40b1f110773e2d4ef594ae7de488db75e7ee51c0d481dfdf45434cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lovelace, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 18:41:11 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:11 compute-0 podman[249952]: 2025-11-24 18:41:11.420999787 +0000 UTC m=+0.147429182 container start 7cb0722ca40b1f110773e2d4ef594ae7de488db75e7ee51c0d481dfdf45434cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lovelace, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 18:41:11 compute-0 podman[249952]: 2025-11-24 18:41:11.423969291 +0000 UTC m=+0.150398706 container attach 7cb0722ca40b1f110773e2d4ef594ae7de488db75e7ee51c0d481dfdf45434cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 18:41:11 compute-0 adoring_lovelace[250020]: 167 167
Nov 24 18:41:11 compute-0 systemd[1]: libpod-7cb0722ca40b1f110773e2d4ef594ae7de488db75e7ee51c0d481dfdf45434cd.scope: Deactivated successfully.
Nov 24 18:41:11 compute-0 conmon[250020]: conmon 7cb0722ca40b1f110773 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7cb0722ca40b1f110773e2d4ef594ae7de488db75e7ee51c0d481dfdf45434cd.scope/container/memory.events
Nov 24 18:41:11 compute-0 podman[249952]: 2025-11-24 18:41:11.425983951 +0000 UTC m=+0.152413366 container died 7cb0722ca40b1f110773e2d4ef594ae7de488db75e7ee51c0d481dfdf45434cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lovelace, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 24 18:41:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bcb754fd75c70343c4a264b3308ce7a62e18f0d9294450041fb39a370e1dd42-merged.mount: Deactivated successfully.
Nov 24 18:41:11 compute-0 podman[249952]: 2025-11-24 18:41:11.458317022 +0000 UTC m=+0.184746417 container remove 7cb0722ca40b1f110773e2d4ef594ae7de488db75e7ee51c0d481dfdf45434cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lovelace, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:41:11 compute-0 systemd[1]: libpod-conmon-7cb0722ca40b1f110773e2d4ef594ae7de488db75e7ee51c0d481dfdf45434cd.scope: Deactivated successfully.
Nov 24 18:41:11 compute-0 sudo[250018]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:11 compute-0 podman[250043]: 2025-11-24 18:41:11.609848145 +0000 UTC m=+0.040456743 container create 681088b41182cd2cfdb95f7050f17836f7a3e29b85bfc40a1e32ae024037bb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 18:41:11 compute-0 systemd[1]: Started libpod-conmon-681088b41182cd2cfdb95f7050f17836f7a3e29b85bfc40a1e32ae024037bb85.scope.
Nov 24 18:41:11 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:41:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc212d3a948f8e5a9d041a394b8eb085c523c82902292b029f28c5cc0c4d96ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:41:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc212d3a948f8e5a9d041a394b8eb085c523c82902292b029f28c5cc0c4d96ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:41:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc212d3a948f8e5a9d041a394b8eb085c523c82902292b029f28c5cc0c4d96ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:41:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc212d3a948f8e5a9d041a394b8eb085c523c82902292b029f28c5cc0c4d96ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:41:11 compute-0 podman[250043]: 2025-11-24 18:41:11.668332233 +0000 UTC m=+0.098940841 container init 681088b41182cd2cfdb95f7050f17836f7a3e29b85bfc40a1e32ae024037bb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_montalcini, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 18:41:11 compute-0 podman[250043]: 2025-11-24 18:41:11.675031899 +0000 UTC m=+0.105640487 container start 681088b41182cd2cfdb95f7050f17836f7a3e29b85bfc40a1e32ae024037bb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_montalcini, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 18:41:11 compute-0 podman[250043]: 2025-11-24 18:41:11.678412563 +0000 UTC m=+0.109021171 container attach 681088b41182cd2cfdb95f7050f17836f7a3e29b85bfc40a1e32ae024037bb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_montalcini, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 18:41:11 compute-0 podman[250043]: 2025-11-24 18:41:11.591801268 +0000 UTC m=+0.022409876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:41:12 compute-0 sudo[250213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwernytlcmujhvgyvwysqdxsmzatwuku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009671.880079-502-13660567653410/AnsiballZ_container_config_data.py'
Nov 24 18:41:12 compute-0 sudo[250213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]: {
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:     "0": [
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:         {
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "devices": [
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "/dev/loop3"
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             ],
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "lv_name": "ceph_lv0",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "lv_size": "21470642176",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "name": "ceph_lv0",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "tags": {
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.cluster_name": "ceph",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.crush_device_class": "",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.encrypted": "0",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.osd_id": "0",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.type": "block",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.vdo": "0"
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             },
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "type": "block",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "vg_name": "ceph_vg0"
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:         }
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:     ],
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:     "1": [
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:         {
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "devices": [
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "/dev/loop4"
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             ],
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "lv_name": "ceph_lv1",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "lv_size": "21470642176",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "name": "ceph_lv1",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "tags": {
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.cluster_name": "ceph",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.crush_device_class": "",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.encrypted": "0",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.osd_id": "1",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.type": "block",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.vdo": "0"
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             },
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "type": "block",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "vg_name": "ceph_vg1"
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:         }
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:     ],
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:     "2": [
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:         {
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "devices": [
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "/dev/loop5"
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             ],
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "lv_name": "ceph_lv2",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "lv_size": "21470642176",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "name": "ceph_lv2",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "tags": {
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.cluster_name": "ceph",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.crush_device_class": "",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.encrypted": "0",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.osd_id": "2",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.type": "block",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:                 "ceph.vdo": "0"
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             },
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "type": "block",
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:             "vg_name": "ceph_vg2"
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:         }
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]:     ]
Nov 24 18:41:12 compute-0 vigilant_montalcini[250083]: }
Nov 24 18:41:12 compute-0 python3.9[250215]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 24 18:41:12 compute-0 systemd[1]: libpod-681088b41182cd2cfdb95f7050f17836f7a3e29b85bfc40a1e32ae024037bb85.scope: Deactivated successfully.
Nov 24 18:41:12 compute-0 podman[250043]: 2025-11-24 18:41:12.406173117 +0000 UTC m=+0.836781705 container died 681088b41182cd2cfdb95f7050f17836f7a3e29b85bfc40a1e32ae024037bb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_montalcini, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 18:41:12 compute-0 sudo[250213]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc212d3a948f8e5a9d041a394b8eb085c523c82902292b029f28c5cc0c4d96ba-merged.mount: Deactivated successfully.
Nov 24 18:41:12 compute-0 podman[250043]: 2025-11-24 18:41:12.4575838 +0000 UTC m=+0.888192378 container remove 681088b41182cd2cfdb95f7050f17836f7a3e29b85bfc40a1e32ae024037bb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_montalcini, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:41:12 compute-0 systemd[1]: libpod-conmon-681088b41182cd2cfdb95f7050f17836f7a3e29b85bfc40a1e32ae024037bb85.scope: Deactivated successfully.
Nov 24 18:41:12 compute-0 sudo[249817]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:12 compute-0 sudo[250254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:41:12 compute-0 sudo[250254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:41:12 compute-0 sudo[250254]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:12 compute-0 sudo[250279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:41:12 compute-0 sudo[250279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:41:12 compute-0 sudo[250279]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:12 compute-0 sudo[250304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:41:12 compute-0 sudo[250304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:41:12 compute-0 sudo[250304]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:12 compute-0 sudo[250329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:41:12 compute-0 sudo[250329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:41:12 compute-0 ceph-mon[74927]: pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:41:12 compute-0 podman[250447]: 2025-11-24 18:41:12.994947478 +0000 UTC m=+0.040344670 container create b3361314280a0151ba558271a5a15287b6ad829663d49a41867f2bf7340b425b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 18:41:13 compute-0 systemd[1]: Started libpod-conmon-b3361314280a0151ba558271a5a15287b6ad829663d49a41867f2bf7340b425b.scope.
Nov 24 18:41:13 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:41:13 compute-0 podman[250447]: 2025-11-24 18:41:13.068698055 +0000 UTC m=+0.114095267 container init b3361314280a0151ba558271a5a15287b6ad829663d49a41867f2bf7340b425b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:41:13 compute-0 podman[250447]: 2025-11-24 18:41:13.074762415 +0000 UTC m=+0.120159607 container start b3361314280a0151ba558271a5a15287b6ad829663d49a41867f2bf7340b425b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 18:41:13 compute-0 podman[250447]: 2025-11-24 18:41:12.98010281 +0000 UTC m=+0.025500022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:41:13 compute-0 gracious_mahavira[250480]: 167 167
Nov 24 18:41:13 compute-0 podman[250447]: 2025-11-24 18:41:13.078120298 +0000 UTC m=+0.123517490 container attach b3361314280a0151ba558271a5a15287b6ad829663d49a41867f2bf7340b425b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mahavira, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 24 18:41:13 compute-0 systemd[1]: libpod-b3361314280a0151ba558271a5a15287b6ad829663d49a41867f2bf7340b425b.scope: Deactivated successfully.
Nov 24 18:41:13 compute-0 podman[250447]: 2025-11-24 18:41:13.078661711 +0000 UTC m=+0.124058903 container died b3361314280a0151ba558271a5a15287b6ad829663d49a41867f2bf7340b425b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:41:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0b026f93abec062ffd88fd744882dc80f7a035f6ebedb975394dae88146d016-merged.mount: Deactivated successfully.
Nov 24 18:41:13 compute-0 podman[250447]: 2025-11-24 18:41:13.112316465 +0000 UTC m=+0.157713657 container remove b3361314280a0151ba558271a5a15287b6ad829663d49a41867f2bf7340b425b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mahavira, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:41:13 compute-0 systemd[1]: libpod-conmon-b3361314280a0151ba558271a5a15287b6ad829663d49a41867f2bf7340b425b.scope: Deactivated successfully.
Nov 24 18:41:13 compute-0 sudo[250554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctjtffslatotpnlmkwarwajzgehlxonm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009672.7749295-511-89061176657887/AnsiballZ_container_config_hash.py'
Nov 24 18:41:13 compute-0 sudo[250554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:13 compute-0 podman[250562]: 2025-11-24 18:41:13.267223981 +0000 UTC m=+0.043808736 container create d3a6e5bd6e49e5b16986ced264a802fff538a952f9099c7fccf0c32a5f452227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 18:41:13 compute-0 systemd[1]: Started libpod-conmon-d3a6e5bd6e49e5b16986ced264a802fff538a952f9099c7fccf0c32a5f452227.scope.
Nov 24 18:41:13 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:41:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea473da794bbcc08bb7188494098b1af7e3e10be298484e9341b074cb6aa61b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:41:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea473da794bbcc08bb7188494098b1af7e3e10be298484e9341b074cb6aa61b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:41:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea473da794bbcc08bb7188494098b1af7e3e10be298484e9341b074cb6aa61b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:41:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea473da794bbcc08bb7188494098b1af7e3e10be298484e9341b074cb6aa61b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:41:13 compute-0 podman[250562]: 2025-11-24 18:41:13.250112368 +0000 UTC m=+0.026697123 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:41:13 compute-0 podman[250562]: 2025-11-24 18:41:13.344884425 +0000 UTC m=+0.121469160 container init d3a6e5bd6e49e5b16986ced264a802fff538a952f9099c7fccf0c32a5f452227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 18:41:13 compute-0 podman[250562]: 2025-11-24 18:41:13.350624077 +0000 UTC m=+0.127208812 container start d3a6e5bd6e49e5b16986ced264a802fff538a952f9099c7fccf0c32a5f452227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_raman, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:41:13 compute-0 podman[250562]: 2025-11-24 18:41:13.353675873 +0000 UTC m=+0.130260608 container attach d3a6e5bd6e49e5b16986ced264a802fff538a952f9099c7fccf0c32a5f452227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_raman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:41:13 compute-0 python3.9[250556]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 18:41:13 compute-0 sudo[250554]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:13 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:14 compute-0 sudo[250741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzmurootlrqcoqtdqkwyritpieusncnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009673.6235952-520-65995360835950/AnsiballZ_podman_container_info.py'
Nov 24 18:41:14 compute-0 sudo[250741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:14 compute-0 zen_raman[250579]: {
Nov 24 18:41:14 compute-0 zen_raman[250579]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:41:14 compute-0 zen_raman[250579]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:41:14 compute-0 zen_raman[250579]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:41:14 compute-0 zen_raman[250579]:         "osd_id": 0,
Nov 24 18:41:14 compute-0 zen_raman[250579]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:41:14 compute-0 zen_raman[250579]:         "type": "bluestore"
Nov 24 18:41:14 compute-0 zen_raman[250579]:     },
Nov 24 18:41:14 compute-0 zen_raman[250579]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:41:14 compute-0 zen_raman[250579]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:41:14 compute-0 zen_raman[250579]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:41:14 compute-0 zen_raman[250579]:         "osd_id": 1,
Nov 24 18:41:14 compute-0 zen_raman[250579]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:41:14 compute-0 zen_raman[250579]:         "type": "bluestore"
Nov 24 18:41:14 compute-0 zen_raman[250579]:     },
Nov 24 18:41:14 compute-0 zen_raman[250579]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:41:14 compute-0 zen_raman[250579]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:41:14 compute-0 zen_raman[250579]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:41:14 compute-0 zen_raman[250579]:         "osd_id": 2,
Nov 24 18:41:14 compute-0 zen_raman[250579]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:41:14 compute-0 zen_raman[250579]:         "type": "bluestore"
Nov 24 18:41:14 compute-0 zen_raman[250579]:     }
Nov 24 18:41:14 compute-0 zen_raman[250579]: }
Nov 24 18:41:14 compute-0 systemd[1]: libpod-d3a6e5bd6e49e5b16986ced264a802fff538a952f9099c7fccf0c32a5f452227.scope: Deactivated successfully.
Nov 24 18:41:14 compute-0 podman[250562]: 2025-11-24 18:41:14.266696105 +0000 UTC m=+1.043280840 container died d3a6e5bd6e49e5b16986ced264a802fff538a952f9099c7fccf0c32a5f452227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_raman, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 18:41:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ea473da794bbcc08bb7188494098b1af7e3e10be298484e9341b074cb6aa61b-merged.mount: Deactivated successfully.
Nov 24 18:41:14 compute-0 python3.9[250746]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 24 18:41:14 compute-0 podman[250562]: 2025-11-24 18:41:14.314250463 +0000 UTC m=+1.090835198 container remove d3a6e5bd6e49e5b16986ced264a802fff538a952f9099c7fccf0c32a5f452227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:41:14 compute-0 systemd[1]: libpod-conmon-d3a6e5bd6e49e5b16986ced264a802fff538a952f9099c7fccf0c32a5f452227.scope: Deactivated successfully.
Nov 24 18:41:14 compute-0 sudo[250329]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:14 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:41:14 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:41:14 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:41:14 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:41:14 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 15b106ed-7f40-4254-9528-f4d7022764c1 does not exist
Nov 24 18:41:14 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 72eaa573-16f4-4ab7-9018-4fe8e4ddd83e does not exist
Nov 24 18:41:14 compute-0 sudo[250796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:41:14 compute-0 sudo[250796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:41:14 compute-0 sudo[250796]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:14 compute-0 sudo[250741]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:14 compute-0 sudo[250828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:41:14 compute-0 sudo[250828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:41:14 compute-0 sudo[250828]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:15 compute-0 ceph-mon[74927]: pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:15 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:41:15 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:41:15 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:15 compute-0 sudo[251002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emvuizzmyputyshfmdefhcrkoynuaezu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764009675.0950751-533-38305640024907/AnsiballZ_edpm_container_manage.py'
Nov 24 18:41:15 compute-0 sudo[251002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:15 compute-0 python3[251004]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 18:41:17 compute-0 podman[251016]: 2025-11-24 18:41:17.079030526 +0000 UTC m=+1.153511749 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 24 18:41:17 compute-0 podman[251075]: 2025-11-24 18:41:17.199575111 +0000 UTC m=+0.038932305 container create e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 24 18:41:17 compute-0 podman[251075]: 2025-11-24 18:41:17.17932022 +0000 UTC m=+0.018677414 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 24 18:41:17 compute-0 python3[251004]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 24 18:41:17 compute-0 sudo[251002]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:17 compute-0 ceph-mon[74927]: pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:17 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:17 compute-0 sudo[251261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtwdnhqhmbqxflofajbdckceluwrgoog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009677.5099092-541-52600692909526/AnsiballZ_stat.py'
Nov 24 18:41:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:41:17 compute-0 sudo[251261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:17 compute-0 python3.9[251263]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:41:18 compute-0 sudo[251261]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:18 compute-0 ceph-mon[74927]: pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:18 compute-0 sudo[251415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzrlcnakghyatqcdwknwbbpcghvrepty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009678.2891164-550-129382189622608/AnsiballZ_file.py'
Nov 24 18:41:18 compute-0 sudo[251415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:18 compute-0 python3.9[251417]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:41:18 compute-0 sudo[251415]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:19 compute-0 sudo[251491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skaxvsucwwbdhjlmjewxmwqossxdxkgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009678.2891164-550-129382189622608/AnsiballZ_stat.py'
Nov 24 18:41:19 compute-0 sudo[251491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:19 compute-0 python3.9[251493]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:41:19 compute-0 sudo[251491]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:19 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:19 compute-0 sudo[251642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccvfwroytsdhwizxwyfgsaecirphuuul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009679.2710717-550-48659998448379/AnsiballZ_copy.py'
Nov 24 18:41:19 compute-0 sudo[251642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:20 compute-0 python3.9[251644]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764009679.2710717-550-48659998448379/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:41:20 compute-0 sudo[251642]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:20 compute-0 sudo[251718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhsjgbotftnkfiquuayhktnejuideogc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009679.2710717-550-48659998448379/AnsiballZ_systemd.py'
Nov 24 18:41:20 compute-0 sudo[251718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:20 compute-0 ceph-mon[74927]: pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:20 compute-0 python3.9[251720]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 18:41:20 compute-0 systemd[1]: Reloading.
Nov 24 18:41:20 compute-0 systemd-sysv-generator[251750]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:41:20 compute-0 systemd-rc-local-generator[251745]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:41:20 compute-0 sudo[251718]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:21 compute-0 sudo[251829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beqvuhkyvguooazptunopoghgxenzkzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009679.2710717-550-48659998448379/AnsiballZ_systemd.py'
Nov 24 18:41:21 compute-0 sudo[251829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:21 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:21 compute-0 python3.9[251831]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:41:21 compute-0 systemd[1]: Reloading.
Nov 24 18:41:21 compute-0 systemd-rc-local-generator[251857]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:41:21 compute-0 systemd-sysv-generator[251863]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:41:21 compute-0 systemd[1]: Starting multipathd container...
Nov 24 18:41:21 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:41:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd885d65a09ab530fdcefa9259171e234eb21e348f6a93688aff9a4d5f7c1db2/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 24 18:41:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd885d65a09ab530fdcefa9259171e234eb21e348f6a93688aff9a4d5f7c1db2/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 24 18:41:21 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514.
Nov 24 18:41:21 compute-0 podman[251871]: 2025-11-24 18:41:21.979622576 +0000 UTC m=+0.112662141 container init e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 18:41:21 compute-0 multipathd[251886]: + sudo -E kolla_set_configs
Nov 24 18:41:22 compute-0 sudo[251892]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 24 18:41:22 compute-0 sudo[251892]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 24 18:41:22 compute-0 sudo[251892]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 24 18:41:22 compute-0 podman[251871]: 2025-11-24 18:41:22.006886041 +0000 UTC m=+0.139925596 container start e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 18:41:22 compute-0 podman[251871]: multipathd
Nov 24 18:41:22 compute-0 systemd[1]: Started multipathd container.
Nov 24 18:41:22 compute-0 sudo[251829]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:22 compute-0 multipathd[251886]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 18:41:22 compute-0 multipathd[251886]: INFO:__main__:Validating config file
Nov 24 18:41:22 compute-0 multipathd[251886]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 18:41:22 compute-0 multipathd[251886]: INFO:__main__:Writing out command to execute
Nov 24 18:41:22 compute-0 sudo[251892]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:22 compute-0 multipathd[251886]: ++ cat /run_command
Nov 24 18:41:22 compute-0 multipathd[251886]: + CMD='/usr/sbin/multipathd -d'
Nov 24 18:41:22 compute-0 multipathd[251886]: + ARGS=
Nov 24 18:41:22 compute-0 multipathd[251886]: + sudo kolla_copy_cacerts
Nov 24 18:41:22 compute-0 podman[251893]: 2025-11-24 18:41:22.078338531 +0000 UTC m=+0.052858170 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 18:41:22 compute-0 sudo[251917]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 24 18:41:22 compute-0 sudo[251917]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 24 18:41:22 compute-0 sudo[251917]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 24 18:41:22 compute-0 sudo[251917]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:22 compute-0 multipathd[251886]: + [[ ! -n '' ]]
Nov 24 18:41:22 compute-0 multipathd[251886]: + . kolla_extend_start
Nov 24 18:41:22 compute-0 multipathd[251886]: Running command: '/usr/sbin/multipathd -d'
Nov 24 18:41:22 compute-0 multipathd[251886]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 24 18:41:22 compute-0 multipathd[251886]: + umask 0022
Nov 24 18:41:22 compute-0 multipathd[251886]: + exec /usr/sbin/multipathd -d
Nov 24 18:41:22 compute-0 systemd[1]: e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514-377f3d61fd3065a5.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 18:41:22 compute-0 systemd[1]: e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514-377f3d61fd3065a5.service: Failed with result 'exit-code'.
Nov 24 18:41:22 compute-0 multipathd[251886]: 3413.787407 | --------start up--------
Nov 24 18:41:22 compute-0 multipathd[251886]: 3413.787427 | read /etc/multipath.conf
Nov 24 18:41:22 compute-0 multipathd[251886]: 3413.792799 | path checkers start up
Nov 24 18:41:22 compute-0 ceph-mon[74927]: pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:22 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 24 18:41:22 compute-0 python3.9[252077]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:41:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:41:22.733 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:41:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:41:22.733 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:41:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:41:22.733 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:41:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:41:23 compute-0 sudo[252248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtfmrtauphwipqnipkjtnoqrkaucicdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009682.8720434-586-191971517047608/AnsiballZ_command.py'
Nov 24 18:41:23 compute-0 sudo[252248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:23 compute-0 podman[252204]: 2025-11-24 18:41:23.249076526 +0000 UTC m=+0.101599247 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 24 18:41:23 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:23 compute-0 python3.9[252256]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:41:23 compute-0 sudo[252248]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:23 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 24 18:41:24 compute-0 sudo[252424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zemfqojszenmcjmcpdoalqktnsdjlxtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009683.6900742-594-174408869800030/AnsiballZ_systemd.py'
Nov 24 18:41:24 compute-0 sudo[252424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:24 compute-0 python3.9[252426]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 18:41:24 compute-0 systemd[1]: Stopping multipathd container...
Nov 24 18:41:24 compute-0 multipathd[251886]: 3416.110390 | exit (signal)
Nov 24 18:41:24 compute-0 multipathd[251886]: 3416.110892 | --------shut down-------
Nov 24 18:41:24 compute-0 systemd[1]: libpod-e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514.scope: Deactivated successfully.
Nov 24 18:41:24 compute-0 podman[252430]: 2025-11-24 18:41:24.459037202 +0000 UTC m=+0.085261483 container died e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 24 18:41:24 compute-0 systemd[1]: e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514-377f3d61fd3065a5.timer: Deactivated successfully.
Nov 24 18:41:24 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514.
Nov 24 18:41:24 compute-0 ceph-mon[74927]: pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd885d65a09ab530fdcefa9259171e234eb21e348f6a93688aff9a4d5f7c1db2-merged.mount: Deactivated successfully.
Nov 24 18:41:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514-userdata-shm.mount: Deactivated successfully.
Nov 24 18:41:24 compute-0 podman[252430]: 2025-11-24 18:41:24.657083547 +0000 UTC m=+0.283307838 container cleanup e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 18:41:24 compute-0 podman[252430]: multipathd
Nov 24 18:41:24 compute-0 podman[252459]: multipathd
Nov 24 18:41:24 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 24 18:41:24 compute-0 systemd[1]: Stopped multipathd container.
Nov 24 18:41:24 compute-0 systemd[1]: Starting multipathd container...
Nov 24 18:41:24 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:41:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd885d65a09ab530fdcefa9259171e234eb21e348f6a93688aff9a4d5f7c1db2/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 24 18:41:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd885d65a09ab530fdcefa9259171e234eb21e348f6a93688aff9a4d5f7c1db2/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 24 18:41:24 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514.
Nov 24 18:41:24 compute-0 podman[252472]: 2025-11-24 18:41:24.862422072 +0000 UTC m=+0.111625755 container init e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:41:24 compute-0 multipathd[252487]: + sudo -E kolla_set_configs
Nov 24 18:41:24 compute-0 podman[252472]: 2025-11-24 18:41:24.891215695 +0000 UTC m=+0.140419378 container start e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.license=GPLv2)
Nov 24 18:41:24 compute-0 sudo[252493]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 24 18:41:24 compute-0 podman[252472]: multipathd
Nov 24 18:41:24 compute-0 sudo[252493]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 24 18:41:24 compute-0 sudo[252493]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 24 18:41:24 compute-0 systemd[1]: Started multipathd container.
Nov 24 18:41:24 compute-0 sudo[252424]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:24 compute-0 multipathd[252487]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 18:41:24 compute-0 multipathd[252487]: INFO:__main__:Validating config file
Nov 24 18:41:24 compute-0 multipathd[252487]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 18:41:24 compute-0 multipathd[252487]: INFO:__main__:Writing out command to execute
Nov 24 18:41:24 compute-0 sudo[252493]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:24 compute-0 multipathd[252487]: ++ cat /run_command
Nov 24 18:41:24 compute-0 multipathd[252487]: + CMD='/usr/sbin/multipathd -d'
Nov 24 18:41:24 compute-0 multipathd[252487]: + ARGS=
Nov 24 18:41:24 compute-0 multipathd[252487]: + sudo kolla_copy_cacerts
Nov 24 18:41:24 compute-0 podman[252494]: 2025-11-24 18:41:24.967582067 +0000 UTC m=+0.066781085 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 24 18:41:24 compute-0 sudo[252519]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 24 18:41:24 compute-0 sudo[252519]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 24 18:41:24 compute-0 sudo[252519]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 24 18:41:24 compute-0 sudo[252519]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:24 compute-0 multipathd[252487]: + [[ ! -n '' ]]
Nov 24 18:41:24 compute-0 multipathd[252487]: + . kolla_extend_start
Nov 24 18:41:24 compute-0 multipathd[252487]: Running command: '/usr/sbin/multipathd -d'
Nov 24 18:41:24 compute-0 multipathd[252487]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 24 18:41:24 compute-0 multipathd[252487]: + umask 0022
Nov 24 18:41:24 compute-0 multipathd[252487]: + exec /usr/sbin/multipathd -d
Nov 24 18:41:24 compute-0 systemd[1]: e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514-75097967f463f0eb.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 18:41:24 compute-0 systemd[1]: e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514-75097967f463f0eb.service: Failed with result 'exit-code'.
Nov 24 18:41:24 compute-0 multipathd[252487]: 3416.682967 | --------start up--------
Nov 24 18:41:24 compute-0 multipathd[252487]: 3416.682983 | read /etc/multipath.conf
Nov 24 18:41:24 compute-0 multipathd[252487]: 3416.687791 | path checkers start up
Nov 24 18:41:25 compute-0 podman[252526]: 2025-11-24 18:41:25.045120497 +0000 UTC m=+0.050460941 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 24 18:41:25 compute-0 sudo[252695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhvlnjtqusjuglmwmdrzhyiseyfmsjhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009685.1081164-602-256031980373911/AnsiballZ_file.py'
Nov 24 18:41:25 compute-0 sudo[252695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:25 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:25 compute-0 python3.9[252697]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:41:25 compute-0 sudo[252695]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:26 compute-0 sudo[252847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdjagrqfqqejgunttjohuspidxwmpgvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009686.0807939-614-156208892383513/AnsiballZ_file.py'
Nov 24 18:41:26 compute-0 sudo[252847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:26 compute-0 ceph-mon[74927]: pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:26 compute-0 python3.9[252849]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 24 18:41:26 compute-0 sudo[252847]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:27 compute-0 sudo[252999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psuqwpvxpvvjfdlkfzmyflhojacgjino ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009686.8416674-622-38128472592477/AnsiballZ_modprobe.py'
Nov 24 18:41:27 compute-0 sudo[252999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:27 compute-0 python3.9[253001]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 24 18:41:27 compute-0 kernel: Key type psk registered
Nov 24 18:41:27 compute-0 sudo[252999]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:27 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:41:27 compute-0 sudo[253162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdrnnyvarudsliyuymqtczzspvkstdzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009687.5526044-630-175418874463445/AnsiballZ_stat.py'
Nov 24 18:41:27 compute-0 sudo[253162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:28 compute-0 python3.9[253164]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:41:28 compute-0 sudo[253162]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:28 compute-0 sudo[253285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klzjvieeedgesqavptwyqykllwxgryvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009687.5526044-630-175418874463445/AnsiballZ_copy.py'
Nov 24 18:41:28 compute-0 sudo[253285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:28 compute-0 ceph-mon[74927]: pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:28 compute-0 python3.9[253287]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764009687.5526044-630-175418874463445/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:41:28 compute-0 sudo[253285]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:29 compute-0 sudo[253437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmjwfuifygbsmxtvkadcqhgnpxiynhed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009688.889722-646-66822180780811/AnsiballZ_lineinfile.py'
Nov 24 18:41:29 compute-0 sudo[253437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:29 compute-0 python3.9[253439]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:41:29 compute-0 sudo[253437]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:29 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:29 compute-0 sudo[253589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckbzlplxkgszeiywoyzlrhdmifhlnmru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009689.61453-654-241178190798845/AnsiballZ_systemd.py'
Nov 24 18:41:29 compute-0 sudo[253589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:30 compute-0 python3.9[253591]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 18:41:30 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 24 18:41:30 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 24 18:41:30 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 24 18:41:30 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 24 18:41:30 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 24 18:41:30 compute-0 sudo[253589]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:30 compute-0 ceph-mon[74927]: pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:30 compute-0 sudo[253745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmgnfpparuuybxklidmekkoaexhghchx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009690.5503864-662-266980359533898/AnsiballZ_dnf.py'
Nov 24 18:41:30 compute-0 sudo[253745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:31 compute-0 python3.9[253747]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 18:41:31 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:32 compute-0 ceph-mon[74927]: pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:41:33 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:33 compute-0 systemd[1]: Reloading.
Nov 24 18:41:33 compute-0 systemd-sysv-generator[253777]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:41:33 compute-0 systemd-rc-local-generator[253774]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:41:33 compute-0 systemd[1]: Reloading.
Nov 24 18:41:34 compute-0 systemd-rc-local-generator[253814]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:41:34 compute-0 systemd-sysv-generator[253819]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:41:34 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 24 18:41:34 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 24 18:41:34 compute-0 systemd-logind[822]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 24 18:41:34 compute-0 systemd-logind[822]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 24 18:41:34 compute-0 lvm[253863]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 24 18:41:34 compute-0 lvm[253863]: VG ceph_vg1 finished
Nov 24 18:41:34 compute-0 lvm[253861]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 18:41:34 compute-0 lvm[253861]: VG ceph_vg0 finished
Nov 24 18:41:34 compute-0 lvm[253864]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 24 18:41:34 compute-0 lvm[253864]: VG ceph_vg2 finished
Nov 24 18:41:34 compute-0 ceph-mon[74927]: pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:41:34
Nov 24 18:41:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:41:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:41:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['vms', '.mgr', 'volumes', 'backups', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'images']
Nov 24 18:41:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:41:34 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 18:41:34 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 18:41:34 compute-0 systemd[1]: Reloading.
Nov 24 18:41:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:41:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:41:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:41:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:41:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:41:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:41:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:41:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:41:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:41:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:41:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:41:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:41:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:41:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:41:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:41:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:41:34 compute-0 systemd-sysv-generator[253922]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:41:34 compute-0 systemd-rc-local-generator[253917]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:41:35 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 18:41:35 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:35 compute-0 sudo[253745]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:36 compute-0 sudo[255203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjzikpkigowmedfkcqwlqqdqtdthllqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009695.7252777-670-157110111115726/AnsiballZ_systemd_service.py'
Nov 24 18:41:36 compute-0 sudo[255203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:36 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 18:41:36 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 18:41:36 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.580s CPU time.
Nov 24 18:41:36 compute-0 systemd[1]: run-r64ca1d792e1a484b90df61e83628ed26.service: Deactivated successfully.
Nov 24 18:41:36 compute-0 python3.9[255205]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 18:41:36 compute-0 systemd[1]: Stopping Open-iSCSI...
Nov 24 18:41:36 compute-0 iscsid[242667]: iscsid shutting down.
Nov 24 18:41:36 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Nov 24 18:41:36 compute-0 systemd[1]: Stopped Open-iSCSI.
Nov 24 18:41:36 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 24 18:41:36 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 24 18:41:36 compute-0 systemd[1]: Started Open-iSCSI.
Nov 24 18:41:36 compute-0 sudo[255203]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:36 compute-0 ceph-mon[74927]: pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:37 compute-0 python3.9[255360]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 18:41:37 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:41:37 compute-0 sudo[255514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztxzaihmcpjgvuczufqqxgzbfjisikxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009697.6238706-688-122383705382238/AnsiballZ_file.py'
Nov 24 18:41:37 compute-0 sudo[255514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:38 compute-0 python3.9[255516]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:41:38 compute-0 sudo[255514]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:38 compute-0 ceph-mon[74927]: pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:38 compute-0 sudo[255666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxjqdgykmywqrgxgjzrdeexvbpjhrgut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009698.4882324-699-172804666000544/AnsiballZ_systemd_service.py'
Nov 24 18:41:38 compute-0 sudo[255666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:39 compute-0 python3.9[255668]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 18:41:39 compute-0 systemd[1]: Reloading.
Nov 24 18:41:39 compute-0 systemd-sysv-generator[255697]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:41:39 compute-0 systemd-rc-local-generator[255693]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:41:39 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:39 compute-0 sudo[255666]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:40 compute-0 python3.9[255852]: ansible-ansible.builtin.service_facts Invoked
Nov 24 18:41:40 compute-0 network[255869]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 18:41:40 compute-0 network[255870]: 'network-scripts' will be removed from distribution in near future.
Nov 24 18:41:40 compute-0 network[255871]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 18:41:40 compute-0 ceph-mon[74927]: pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:41 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:42 compute-0 ceph-mon[74927]: pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:41:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:41:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:41:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:41:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:41:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:41:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:41:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:41:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:41:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:41:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:41:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:41:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:41:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:41:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:41:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:41:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:41:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:41:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:41:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:41:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:41:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:41:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:41:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:41:43 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:44 compute-0 ceph-mon[74927]: pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:44 compute-0 sudo[256144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhgethnvtccnapttnrybdqotwapqfksw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009704.510013-718-31558496967057/AnsiballZ_systemd_service.py'
Nov 24 18:41:44 compute-0 sudo[256144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:45 compute-0 python3.9[256146]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:41:45 compute-0 sudo[256144]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:45 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:45 compute-0 sudo[256297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bacdtudrkgvlsowbhwwayyplmptdhzcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009705.3039722-718-47055402542228/AnsiballZ_systemd_service.py'
Nov 24 18:41:45 compute-0 sudo[256297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:45 compute-0 python3.9[256299]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:41:45 compute-0 sudo[256297]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:46 compute-0 sudo[256450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptwrlbkhonsjqclbrpdpnyibgmsgxeli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009706.0480502-718-158858832433614/AnsiballZ_systemd_service.py'
Nov 24 18:41:46 compute-0 sudo[256450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:46 compute-0 python3.9[256452]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:41:46 compute-0 sudo[256450]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:46 compute-0 ceph-mon[74927]: pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:46 compute-0 sudo[256603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbyafftkgjwzzwqcdkuhkslvntuqjvqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009706.712329-718-69398171915487/AnsiballZ_systemd_service.py'
Nov 24 18:41:46 compute-0 sudo[256603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:47 compute-0 python3.9[256605]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:41:47 compute-0 sudo[256603]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:47 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:47 compute-0 sudo[256756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adumldmasziwwwdtgqduthwgvjmamwmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009707.4134712-718-153077458037463/AnsiballZ_systemd_service.py'
Nov 24 18:41:47 compute-0 sudo[256756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:41:47 compute-0 python3.9[256758]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:41:48 compute-0 sudo[256756]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:48 compute-0 sudo[256909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzonpndiwrowqnosbmwpseqohqxjohow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009708.1506925-718-131088020394899/AnsiballZ_systemd_service.py'
Nov 24 18:41:48 compute-0 sudo[256909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:48 compute-0 ceph-mon[74927]: pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:48 compute-0 python3.9[256911]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:41:48 compute-0 sudo[256909]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:49 compute-0 sudo[257062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmgjelvjvhdiazqhaqzjdugzbuspslbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009708.8278377-718-107763164680696/AnsiballZ_systemd_service.py'
Nov 24 18:41:49 compute-0 sudo[257062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:49 compute-0 python3.9[257064]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:41:49 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:49 compute-0 sudo[257062]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:49 compute-0 sudo[257215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kacpalzabkgaizsvvmppvcmrhupakzws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009709.5839384-718-110548522085668/AnsiballZ_systemd_service.py'
Nov 24 18:41:49 compute-0 sudo[257215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:50 compute-0 python3.9[257217]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:41:50 compute-0 sudo[257215]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:50 compute-0 ceph-mon[74927]: pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:50 compute-0 sudo[257368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdcbseaqtaambkdmqtefncswgonvybff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009710.585677-777-10615686039218/AnsiballZ_file.py'
Nov 24 18:41:50 compute-0 sudo[257368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:51 compute-0 python3.9[257370]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:41:51 compute-0 sudo[257368]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:51 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:51 compute-0 sudo[257520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhxbftrgraeutllnpahnvkbbyclcuroh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009711.2386081-777-218354070918662/AnsiballZ_file.py'
Nov 24 18:41:51 compute-0 sudo[257520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:51 compute-0 python3.9[257522]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:41:51 compute-0 sudo[257520]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:52 compute-0 sudo[257672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyptqnggosnyqmjyijncljrikbzlchxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009711.900795-777-67567864166611/AnsiballZ_file.py'
Nov 24 18:41:52 compute-0 sudo[257672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:52 compute-0 python3.9[257674]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:41:52 compute-0 sudo[257672]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:52 compute-0 ceph-mon[74927]: pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:41:52 compute-0 sudo[257824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmzmpoiknzgqoosjszvedcvqudjpnzoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009712.5230665-777-232912420050898/AnsiballZ_file.py'
Nov 24 18:41:52 compute-0 sudo[257824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:52 compute-0 python3.9[257826]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:41:53 compute-0 sudo[257824]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:53 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:53 compute-0 sudo[257989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlyfnjidxcutvvaedixjowmpmtgiaqmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009713.156057-777-58893581565318/AnsiballZ_file.py'
Nov 24 18:41:53 compute-0 sudo[257989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:53 compute-0 podman[257950]: 2025-11-24 18:41:53.476716485 +0000 UTC m=+0.082066453 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 24 18:41:53 compute-0 python3.9[257996]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:41:53 compute-0 sudo[257989]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:54 compute-0 sudo[258155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyhjvjpdkrfctmljqhidutriuptnphus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009713.74608-777-21409988431723/AnsiballZ_file.py'
Nov 24 18:41:54 compute-0 sudo[258155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:54 compute-0 python3.9[258157]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:41:54 compute-0 sudo[258155]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:54 compute-0 sudo[258307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgsknbbsiznkbiteqhdatfejtqfaqpat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009714.351472-777-17375317143236/AnsiballZ_file.py'
Nov 24 18:41:54 compute-0 ceph-mon[74927]: pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:54 compute-0 sudo[258307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:54 compute-0 python3.9[258309]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:41:54 compute-0 sudo[258307]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:55 compute-0 sudo[258482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcpozdwmuajmxpsjlxkcegnavlnqzfhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009714.9915395-777-87049354773339/AnsiballZ_file.py'
Nov 24 18:41:55 compute-0 sudo[258482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:55 compute-0 podman[258433]: 2025-11-24 18:41:55.276320536 +0000 UTC m=+0.056161382 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 18:41:55 compute-0 podman[258434]: 2025-11-24 18:41:55.302552455 +0000 UTC m=+0.075978213 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 18:41:55 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:55 compute-0 python3.9[258497]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:41:55 compute-0 sudo[258482]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:55 compute-0 sudo[258651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsejttecxodaolmtckltkvgjymzylzgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009715.6131406-834-62249756007863/AnsiballZ_file.py'
Nov 24 18:41:55 compute-0 sudo[258651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:56 compute-0 python3.9[258653]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:41:56 compute-0 sudo[258651]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:56 compute-0 sudo[258803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czmmxdvkvsrmttafluirvwnmhsqyswzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009716.2597046-834-127608149804617/AnsiballZ_file.py'
Nov 24 18:41:56 compute-0 sudo[258803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:56 compute-0 ceph-mon[74927]: pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:56 compute-0 python3.9[258805]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:41:56 compute-0 sudo[258803]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:57 compute-0 sudo[258955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyqliphidjjauuwmevzaqccyfdbqccox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009716.9200313-834-272259841850892/AnsiballZ_file.py'
Nov 24 18:41:57 compute-0 sudo[258955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:57 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:57 compute-0 python3.9[258957]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:41:57 compute-0 sudo[258955]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:41:57 compute-0 sudo[259107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qovwmepanahhxrqsanufyytofmjubczv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009717.6019547-834-210583173046596/AnsiballZ_file.py'
Nov 24 18:41:57 compute-0 sudo[259107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:58 compute-0 python3.9[259109]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:41:58 compute-0 sudo[259107]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:58 compute-0 sudo[259259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvnwnnyggdyqwppxroqdglswdslvqfey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009718.20817-834-212232255061472/AnsiballZ_file.py'
Nov 24 18:41:58 compute-0 sudo[259259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:58 compute-0 ceph-mon[74927]: pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:58 compute-0 python3.9[259261]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:41:58 compute-0 sudo[259259]: pam_unix(sudo:session): session closed for user root
Nov 24 18:41:59 compute-0 sudo[259411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcxgercegqiwwcckhrctdjoegmxspjkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009718.9055035-834-226474863399097/AnsiballZ_file.py'
Nov 24 18:41:59 compute-0 sudo[259411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:41:59 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:41:59 compute-0 python3.9[259413]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:41:59 compute-0 sudo[259411]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:00 compute-0 sudo[259564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoattqqhlcpkjzqwiecnuobsixqfuudg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009720.004127-834-231660835595004/AnsiballZ_file.py'
Nov 24 18:42:00 compute-0 sudo[259564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:00 compute-0 python3.9[259566]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:42:00 compute-0 sudo[259564]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:00 compute-0 ceph-mon[74927]: pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:00 compute-0 sudo[259716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftjswxmerymvarzvcwydqeikvhrgijot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009720.6575823-834-30332333274093/AnsiballZ_file.py'
Nov 24 18:42:00 compute-0 sudo[259716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:01 compute-0 python3.9[259718]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:42:01 compute-0 sudo[259716]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:01 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:01 compute-0 sudo[259868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruosvhxoaymgzbaqafqmthyrgrabpqao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009721.5346766-892-280884737219757/AnsiballZ_command.py'
Nov 24 18:42:01 compute-0 sudo[259868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:01 compute-0 python3.9[259870]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:42:01 compute-0 sudo[259868]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:02 compute-0 ceph-mon[74927]: pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:42:02 compute-0 python3.9[260022]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 18:42:03 compute-0 sudo[260172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcqosuxarjwqtewkponqzjqopgrpnofy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009723.0778573-910-16001543331157/AnsiballZ_systemd_service.py'
Nov 24 18:42:03 compute-0 sudo[260172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:03 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:03 compute-0 python3.9[260174]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 18:42:03 compute-0 systemd[1]: Reloading.
Nov 24 18:42:03 compute-0 systemd-sysv-generator[260202]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:42:03 compute-0 systemd-rc-local-generator[260195]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:42:03 compute-0 sudo[260172]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:04 compute-0 sudo[260360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvcemfnsdkcyxjduxmznsltqbezmolar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009724.1495948-918-63835066453123/AnsiballZ_command.py'
Nov 24 18:42:04 compute-0 sudo[260360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:04 compute-0 python3.9[260362]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:42:04 compute-0 sudo[260360]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:42:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:42:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:42:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:42:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:42:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:42:04 compute-0 ceph-mon[74927]: pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:05 compute-0 sudo[260513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyokwgdufjccakudpqhcqlkmxaaetiwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009724.82048-918-113616819787967/AnsiballZ_command.py'
Nov 24 18:42:05 compute-0 sudo[260513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:05 compute-0 python3.9[260515]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:42:05 compute-0 sudo[260513]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:05 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:42:05 compute-0 sudo[260666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjqynjuozufutlhijfxbgpmbmoniaohs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009725.438889-918-158471060122438/AnsiballZ_command.py'
Nov 24 18:42:05 compute-0 sudo[260666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:05 compute-0 python3.9[260668]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:42:05 compute-0 sudo[260666]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:06 compute-0 sudo[260819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgbepddaweyearwemnyocwvxowqhrjvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009726.061523-918-39371177935551/AnsiballZ_command.py'
Nov 24 18:42:06 compute-0 sudo[260819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:06 compute-0 python3.9[260821]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:42:06 compute-0 sudo[260819]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:06 compute-0 ceph-mon[74927]: pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:42:06 compute-0 sudo[260972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnjnhwqqgzuihyctxgvknlvbvklhjpnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009726.7464628-918-236443285228546/AnsiballZ_command.py'
Nov 24 18:42:06 compute-0 sudo[260972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:07 compute-0 python3.9[260974]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:42:07 compute-0 sudo[260972]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:07 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:42:07 compute-0 sudo[261125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qowmlmkedbmxooqchlaouesddtmnfbcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009727.320554-918-121841980542075/AnsiballZ_command.py'
Nov 24 18:42:07 compute-0 sudo[261125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:42:07 compute-0 python3.9[261127]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:42:07 compute-0 sudo[261125]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:08 compute-0 sudo[261278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odcnjeskdjhdfupkiqucnyuvwdhiyssn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009727.9942157-918-226103341346524/AnsiballZ_command.py'
Nov 24 18:42:08 compute-0 sudo[261278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:08 compute-0 python3.9[261280]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:42:08 compute-0 sudo[261278]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:08 compute-0 ceph-mon[74927]: pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:42:08 compute-0 sudo[261431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noxapumkqcrnzjdilplidvvtpxabqbje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009728.5647984-918-227215111111606/AnsiballZ_command.py'
Nov 24 18:42:08 compute-0 sudo[261431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:09 compute-0 python3.9[261433]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 18:42:09 compute-0 sudo[261431]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:09 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:42:10 compute-0 sudo[261584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdzclwsakxcxmjjlylsjameptqqhpnsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009729.9947598-997-25996708568333/AnsiballZ_file.py'
Nov 24 18:42:10 compute-0 sudo[261584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:10 compute-0 python3.9[261586]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:42:10 compute-0 sudo[261584]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:10 compute-0 ceph-mon[74927]: pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:42:10 compute-0 sudo[261736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvwnsmewwltozgwpoanycuuselrgdnfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009730.588139-997-104252912614857/AnsiballZ_file.py'
Nov 24 18:42:10 compute-0 sudo[261736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:10 compute-0 python3.9[261738]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:42:11 compute-0 sudo[261736]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:11 compute-0 sudo[261888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kevqynnwsekzsvmimcpkizxsrhewpndb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009731.152448-997-1485637457332/AnsiballZ_file.py'
Nov 24 18:42:11 compute-0 sudo[261888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:11 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:42:11 compute-0 python3.9[261890]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:42:11 compute-0 sudo[261888]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:12 compute-0 sudo[262040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lprcvnrbnylzwsfhhnyrwghfymbvqexy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009731.8258445-1019-165542503748885/AnsiballZ_file.py'
Nov 24 18:42:12 compute-0 sudo[262040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:12 compute-0 python3.9[262042]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:42:12 compute-0 sudo[262040]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:12 compute-0 ceph-mon[74927]: pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:42:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:42:12 compute-0 sudo[262192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzjlxrfgzfnbrndxhlffootripagsqvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009732.4735398-1019-196602684090780/AnsiballZ_file.py'
Nov 24 18:42:12 compute-0 sudo[262192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:13 compute-0 python3.9[262194]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:42:13 compute-0 sudo[262192]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:13 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:42:13 compute-0 sudo[262344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwlxevvunjlmccshqqsvazdzcwfisrfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009733.1799088-1019-48836847769881/AnsiballZ_file.py'
Nov 24 18:42:13 compute-0 sudo[262344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:13 compute-0 python3.9[262346]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:42:13 compute-0 sudo[262344]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:14 compute-0 sudo[262496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iefyseajwyovfmzzaovlqbxpddrbpbwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009733.8402326-1019-2440554880793/AnsiballZ_file.py'
Nov 24 18:42:14 compute-0 sudo[262496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:14 compute-0 python3.9[262498]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:42:14 compute-0 sudo[262496]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:14 compute-0 sudo[262575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:42:14 compute-0 sudo[262575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:42:14 compute-0 sudo[262575]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:14 compute-0 sudo[262623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:42:14 compute-0 sudo[262623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:42:14 compute-0 sudo[262623]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:14 compute-0 sudo[262651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:42:14 compute-0 sudo[262651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:42:14 compute-0 sudo[262651]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:14 compute-0 sudo[262746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cynvcbnytnojnqyituaikeqruizifcgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009734.422789-1019-266677920265140/AnsiballZ_file.py'
Nov 24 18:42:14 compute-0 sudo[262746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:14 compute-0 sudo[262701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 24 18:42:14 compute-0 sudo[262701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:42:14 compute-0 ceph-mon[74927]: pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:42:14 compute-0 python3.9[262748]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:42:14 compute-0 sudo[262746]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:15 compute-0 podman[262871]: 2025-11-24 18:42:15.122194465 +0000 UTC m=+0.049653876 container exec 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 18:42:15 compute-0 podman[262871]: 2025-11-24 18:42:15.216200023 +0000 UTC m=+0.143659464 container exec_died 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 18:42:15 compute-0 sudo[263017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qouqkcrszclxjapffhmkzlbiszzbifvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009735.0440185-1019-18123234104507/AnsiballZ_file.py'
Nov 24 18:42:15 compute-0 sudo[263017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:15 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:42:15 compute-0 python3.9[263022]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:42:15 compute-0 sudo[263017]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:15 compute-0 sudo[262701]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:15 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:42:15 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:42:15 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:42:15 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:42:15 compute-0 sudo[263232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:42:15 compute-0 sudo[263232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:42:15 compute-0 sudo[263232]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:15 compute-0 sudo[263281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:42:15 compute-0 sudo[263281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:42:15 compute-0 sudo[263281]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:15 compute-0 sudo[263331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmpkkmuxzbrrujsltwhvaejbevmupeup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009735.6282253-1019-102082608464445/AnsiballZ_file.py'
Nov 24 18:42:15 compute-0 sudo[263331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:15 compute-0 sudo[263333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:42:15 compute-0 sudo[263333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:42:15 compute-0 sudo[263333]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:16 compute-0 sudo[263360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:42:16 compute-0 sudo[263360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:42:16 compute-0 python3.9[263346]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:42:16 compute-0 sudo[263331]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:16 compute-0 sudo[263360]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:16 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:42:16 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:42:16 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:42:16 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:42:16 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:42:16 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:42:16 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 7f1d48bf-401e-4271-b064-4466b7a62d19 does not exist
Nov 24 18:42:16 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev cacdd1c3-1346-4194-be4f-33d6d16d8e7f does not exist
Nov 24 18:42:16 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 6366fb20-a392-4a5d-9f54-5983066d2d96 does not exist
Nov 24 18:42:16 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:42:16 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:42:16 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:42:16 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:42:16 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:42:16 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:42:16 compute-0 sudo[263439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:42:16 compute-0 sudo[263439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:42:16 compute-0 sudo[263439]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:16 compute-0 sudo[263464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:42:16 compute-0 sudo[263464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:42:16 compute-0 sudo[263464]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:16 compute-0 sudo[263489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:42:16 compute-0 sudo[263489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:42:16 compute-0 sudo[263489]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:16 compute-0 sudo[263514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:42:16 compute-0 sudo[263514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:42:16 compute-0 ceph-mon[74927]: pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:42:16 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:42:16 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:42:16 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:42:16 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:42:16 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:42:16 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:42:16 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:42:16 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:42:17 compute-0 podman[263580]: 2025-11-24 18:42:17.097291906 +0000 UTC m=+0.041055532 container create 3b5d68748547c315d4145b6a4903801b25c89bbbb299bfbd8e1597303b5f6236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rubin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 18:42:17 compute-0 systemd[1]: Started libpod-conmon-3b5d68748547c315d4145b6a4903801b25c89bbbb299bfbd8e1597303b5f6236.scope.
Nov 24 18:42:17 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:42:17 compute-0 podman[263580]: 2025-11-24 18:42:17.078322514 +0000 UTC m=+0.022086130 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:42:17 compute-0 podman[263580]: 2025-11-24 18:42:17.179324346 +0000 UTC m=+0.123087942 container init 3b5d68748547c315d4145b6a4903801b25c89bbbb299bfbd8e1597303b5f6236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 24 18:42:17 compute-0 podman[263580]: 2025-11-24 18:42:17.190361591 +0000 UTC m=+0.134125187 container start 3b5d68748547c315d4145b6a4903801b25c89bbbb299bfbd8e1597303b5f6236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 18:42:17 compute-0 podman[263580]: 2025-11-24 18:42:17.193643662 +0000 UTC m=+0.137407268 container attach 3b5d68748547c315d4145b6a4903801b25c89bbbb299bfbd8e1597303b5f6236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:42:17 compute-0 vibrant_rubin[263596]: 167 167
Nov 24 18:42:17 compute-0 systemd[1]: libpod-3b5d68748547c315d4145b6a4903801b25c89bbbb299bfbd8e1597303b5f6236.scope: Deactivated successfully.
Nov 24 18:42:17 compute-0 podman[263580]: 2025-11-24 18:42:17.196957765 +0000 UTC m=+0.140721431 container died 3b5d68748547c315d4145b6a4903801b25c89bbbb299bfbd8e1597303b5f6236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rubin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:42:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-65a9f1068ac774feea142d8707b582cf6f5609770c753331f44eb763be292fa4-merged.mount: Deactivated successfully.
Nov 24 18:42:17 compute-0 podman[263580]: 2025-11-24 18:42:17.251320947 +0000 UTC m=+0.195084573 container remove 3b5d68748547c315d4145b6a4903801b25c89bbbb299bfbd8e1597303b5f6236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rubin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:42:17 compute-0 systemd[1]: libpod-conmon-3b5d68748547c315d4145b6a4903801b25c89bbbb299bfbd8e1597303b5f6236.scope: Deactivated successfully.
Nov 24 18:42:17 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:17 compute-0 podman[263620]: 2025-11-24 18:42:17.445547086 +0000 UTC m=+0.040295983 container create 949789d9bd278eb30418bbaf05638c36cf855e75346508b0ce68897a5deb8d83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_rosalind, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:42:17 compute-0 systemd[1]: Started libpod-conmon-949789d9bd278eb30418bbaf05638c36cf855e75346508b0ce68897a5deb8d83.scope.
Nov 24 18:42:17 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:42:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9aefbe4ff8aa87020c4ced85062f3cb35c90f4f458c3d7fe0a4e19eb3bb100/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:42:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9aefbe4ff8aa87020c4ced85062f3cb35c90f4f458c3d7fe0a4e19eb3bb100/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:42:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9aefbe4ff8aa87020c4ced85062f3cb35c90f4f458c3d7fe0a4e19eb3bb100/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:42:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9aefbe4ff8aa87020c4ced85062f3cb35c90f4f458c3d7fe0a4e19eb3bb100/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:42:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9aefbe4ff8aa87020c4ced85062f3cb35c90f4f458c3d7fe0a4e19eb3bb100/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:42:17 compute-0 podman[263620]: 2025-11-24 18:42:17.431307772 +0000 UTC m=+0.026056689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:42:17 compute-0 podman[263620]: 2025-11-24 18:42:17.530184731 +0000 UTC m=+0.124933628 container init 949789d9bd278eb30418bbaf05638c36cf855e75346508b0ce68897a5deb8d83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:42:17 compute-0 podman[263620]: 2025-11-24 18:42:17.541250286 +0000 UTC m=+0.135999183 container start 949789d9bd278eb30418bbaf05638c36cf855e75346508b0ce68897a5deb8d83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 18:42:17 compute-0 podman[263620]: 2025-11-24 18:42:17.544815045 +0000 UTC m=+0.139563942 container attach 949789d9bd278eb30418bbaf05638c36cf855e75346508b0ce68897a5deb8d83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_rosalind, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:42:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:42:18 compute-0 dreamy_rosalind[263637]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:42:18 compute-0 dreamy_rosalind[263637]: --> relative data size: 1.0
Nov 24 18:42:18 compute-0 dreamy_rosalind[263637]: --> All data devices are unavailable
Nov 24 18:42:18 compute-0 systemd[1]: libpod-949789d9bd278eb30418bbaf05638c36cf855e75346508b0ce68897a5deb8d83.scope: Deactivated successfully.
Nov 24 18:42:18 compute-0 podman[263620]: 2025-11-24 18:42:18.576532344 +0000 UTC m=+1.171281251 container died 949789d9bd278eb30418bbaf05638c36cf855e75346508b0ce68897a5deb8d83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:42:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce9aefbe4ff8aa87020c4ced85062f3cb35c90f4f458c3d7fe0a4e19eb3bb100-merged.mount: Deactivated successfully.
Nov 24 18:42:18 compute-0 podman[263620]: 2025-11-24 18:42:18.631107421 +0000 UTC m=+1.225856318 container remove 949789d9bd278eb30418bbaf05638c36cf855e75346508b0ce68897a5deb8d83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_rosalind, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:42:18 compute-0 systemd[1]: libpod-conmon-949789d9bd278eb30418bbaf05638c36cf855e75346508b0ce68897a5deb8d83.scope: Deactivated successfully.
Nov 24 18:42:18 compute-0 sudo[263514]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:18 compute-0 sudo[263678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:42:18 compute-0 sudo[263678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:42:18 compute-0 sudo[263678]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:18 compute-0 ceph-mon[74927]: pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:18 compute-0 sudo[263703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:42:18 compute-0 sudo[263703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:42:18 compute-0 sudo[263703]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:18 compute-0 sudo[263728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:42:18 compute-0 sudo[263728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:42:18 compute-0 sudo[263728]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:18 compute-0 sudo[263753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:42:18 compute-0 sudo[263753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:42:19 compute-0 podman[263817]: 2025-11-24 18:42:19.145419693 +0000 UTC m=+0.043469573 container create d4e6db603973ff7748814b276e8ba3504e1bcf50b179f592e12c6f6613df348f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_snyder, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:42:19 compute-0 systemd[1]: Started libpod-conmon-d4e6db603973ff7748814b276e8ba3504e1bcf50b179f592e12c6f6613df348f.scope.
Nov 24 18:42:19 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:42:19 compute-0 podman[263817]: 2025-11-24 18:42:19.208948112 +0000 UTC m=+0.106998022 container init d4e6db603973ff7748814b276e8ba3504e1bcf50b179f592e12c6f6613df348f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:42:19 compute-0 podman[263817]: 2025-11-24 18:42:19.214551572 +0000 UTC m=+0.112601472 container start d4e6db603973ff7748814b276e8ba3504e1bcf50b179f592e12c6f6613df348f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_snyder, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:42:19 compute-0 magical_snyder[263833]: 167 167
Nov 24 18:42:19 compute-0 podman[263817]: 2025-11-24 18:42:19.219273639 +0000 UTC m=+0.117323579 container attach d4e6db603973ff7748814b276e8ba3504e1bcf50b179f592e12c6f6613df348f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_snyder, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:42:19 compute-0 systemd[1]: libpod-d4e6db603973ff7748814b276e8ba3504e1bcf50b179f592e12c6f6613df348f.scope: Deactivated successfully.
Nov 24 18:42:19 compute-0 podman[263817]: 2025-11-24 18:42:19.220665164 +0000 UTC m=+0.118715054 container died d4e6db603973ff7748814b276e8ba3504e1bcf50b179f592e12c6f6613df348f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_snyder, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:42:19 compute-0 podman[263817]: 2025-11-24 18:42:19.125412765 +0000 UTC m=+0.023462705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:42:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8c9a5e19b7c3637e9137baa1fa2b57999dc70d33b483a567941221dcafb7758-merged.mount: Deactivated successfully.
Nov 24 18:42:19 compute-0 podman[263817]: 2025-11-24 18:42:19.257396237 +0000 UTC m=+0.155446127 container remove d4e6db603973ff7748814b276e8ba3504e1bcf50b179f592e12c6f6613df348f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_snyder, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 24 18:42:19 compute-0 systemd[1]: libpod-conmon-d4e6db603973ff7748814b276e8ba3504e1bcf50b179f592e12c6f6613df348f.scope: Deactivated successfully.
Nov 24 18:42:19 compute-0 podman[263856]: 2025-11-24 18:42:19.395753488 +0000 UTC m=+0.036348465 container create d7a11960a965612fef298c6e50af4ed5a494f6ec922b956ecf7b7f6c0680f10a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_noyce, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:42:19 compute-0 systemd[1]: Started libpod-conmon-d7a11960a965612fef298c6e50af4ed5a494f6ec922b956ecf7b7f6c0680f10a.scope.
Nov 24 18:42:19 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:19 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:42:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7eb860d187d3d4983ec7e76176419150222c504febb01c53fbcaffac99b2fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:42:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7eb860d187d3d4983ec7e76176419150222c504febb01c53fbcaffac99b2fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:42:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7eb860d187d3d4983ec7e76176419150222c504febb01c53fbcaffac99b2fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:42:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7eb860d187d3d4983ec7e76176419150222c504febb01c53fbcaffac99b2fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:42:19 compute-0 podman[263856]: 2025-11-24 18:42:19.475148523 +0000 UTC m=+0.115743520 container init d7a11960a965612fef298c6e50af4ed5a494f6ec922b956ecf7b7f6c0680f10a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_noyce, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 18:42:19 compute-0 podman[263856]: 2025-11-24 18:42:19.380912629 +0000 UTC m=+0.021507626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:42:19 compute-0 podman[263856]: 2025-11-24 18:42:19.488336151 +0000 UTC m=+0.128931128 container start d7a11960a965612fef298c6e50af4ed5a494f6ec922b956ecf7b7f6c0680f10a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_noyce, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 18:42:19 compute-0 podman[263856]: 2025-11-24 18:42:19.495171741 +0000 UTC m=+0.135766738 container attach d7a11960a965612fef298c6e50af4ed5a494f6ec922b956ecf7b7f6c0680f10a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 24 18:42:20 compute-0 amazing_noyce[263872]: {
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:     "0": [
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:         {
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "devices": [
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "/dev/loop3"
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             ],
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "lv_name": "ceph_lv0",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "lv_size": "21470642176",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "name": "ceph_lv0",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "tags": {
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.cluster_name": "ceph",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.crush_device_class": "",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.encrypted": "0",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.osd_id": "0",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.type": "block",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.vdo": "0"
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             },
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "type": "block",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "vg_name": "ceph_vg0"
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:         }
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:     ],
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:     "1": [
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:         {
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "devices": [
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "/dev/loop4"
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             ],
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "lv_name": "ceph_lv1",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "lv_size": "21470642176",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "name": "ceph_lv1",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "tags": {
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.cluster_name": "ceph",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.crush_device_class": "",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.encrypted": "0",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.osd_id": "1",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.type": "block",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.vdo": "0"
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             },
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "type": "block",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "vg_name": "ceph_vg1"
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:         }
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:     ],
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:     "2": [
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:         {
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "devices": [
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "/dev/loop5"
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             ],
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "lv_name": "ceph_lv2",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "lv_size": "21470642176",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "name": "ceph_lv2",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "tags": {
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.cluster_name": "ceph",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.crush_device_class": "",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.encrypted": "0",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.osd_id": "2",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.type": "block",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:                 "ceph.vdo": "0"
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             },
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "type": "block",
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:             "vg_name": "ceph_vg2"
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:         }
Nov 24 18:42:20 compute-0 amazing_noyce[263872]:     ]
Nov 24 18:42:20 compute-0 amazing_noyce[263872]: }
Nov 24 18:42:20 compute-0 systemd[1]: libpod-d7a11960a965612fef298c6e50af4ed5a494f6ec922b956ecf7b7f6c0680f10a.scope: Deactivated successfully.
Nov 24 18:42:20 compute-0 podman[263856]: 2025-11-24 18:42:20.243861651 +0000 UTC m=+0.884456638 container died d7a11960a965612fef298c6e50af4ed5a494f6ec922b956ecf7b7f6c0680f10a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_noyce, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:42:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a7eb860d187d3d4983ec7e76176419150222c504febb01c53fbcaffac99b2fa-merged.mount: Deactivated successfully.
Nov 24 18:42:20 compute-0 podman[263856]: 2025-11-24 18:42:20.297029593 +0000 UTC m=+0.937624560 container remove d7a11960a965612fef298c6e50af4ed5a494f6ec922b956ecf7b7f6c0680f10a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_noyce, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:42:20 compute-0 systemd[1]: libpod-conmon-d7a11960a965612fef298c6e50af4ed5a494f6ec922b956ecf7b7f6c0680f10a.scope: Deactivated successfully.
Nov 24 18:42:20 compute-0 sudo[263753]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:20 compute-0 sudo[263894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:42:20 compute-0 sudo[263894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:42:20 compute-0 sudo[263894]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:20 compute-0 sudo[263919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:42:20 compute-0 sudo[263919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:42:20 compute-0 sudo[263919]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:20 compute-0 sudo[263944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:42:20 compute-0 sudo[263944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:42:20 compute-0 sudo[263944]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:20 compute-0 sudo[263969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:42:20 compute-0 sudo[263969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:42:20 compute-0 ceph-mon[74927]: pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:20 compute-0 podman[264035]: 2025-11-24 18:42:20.820549453 +0000 UTC m=+0.037204636 container create 1afec1a5641165401c6dd150d4a2973812343ebe88c82613a3294d545338fbc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:42:20 compute-0 systemd[1]: Started libpod-conmon-1afec1a5641165401c6dd150d4a2973812343ebe88c82613a3294d545338fbc2.scope.
Nov 24 18:42:20 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:42:20 compute-0 podman[264035]: 2025-11-24 18:42:20.891044916 +0000 UTC m=+0.107700119 container init 1afec1a5641165401c6dd150d4a2973812343ebe88c82613a3294d545338fbc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_keldysh, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:42:20 compute-0 podman[264035]: 2025-11-24 18:42:20.896144243 +0000 UTC m=+0.112799426 container start 1afec1a5641165401c6dd150d4a2973812343ebe88c82613a3294d545338fbc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_keldysh, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 18:42:20 compute-0 podman[264035]: 2025-11-24 18:42:20.804285469 +0000 UTC m=+0.020940682 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:42:20 compute-0 vigilant_keldysh[264051]: 167 167
Nov 24 18:42:20 compute-0 systemd[1]: libpod-1afec1a5641165401c6dd150d4a2973812343ebe88c82613a3294d545338fbc2.scope: Deactivated successfully.
Nov 24 18:42:20 compute-0 conmon[264051]: conmon 1afec1a5641165401c6d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1afec1a5641165401c6dd150d4a2973812343ebe88c82613a3294d545338fbc2.scope/container/memory.events
Nov 24 18:42:20 compute-0 podman[264035]: 2025-11-24 18:42:20.902925401 +0000 UTC m=+0.119580604 container attach 1afec1a5641165401c6dd150d4a2973812343ebe88c82613a3294d545338fbc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 18:42:20 compute-0 podman[264035]: 2025-11-24 18:42:20.903160577 +0000 UTC m=+0.119815760 container died 1afec1a5641165401c6dd150d4a2973812343ebe88c82613a3294d545338fbc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:42:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fa2f869cfda5624a7ee04908005560d278295e9b273a996ca2ec1a4d9759adb-merged.mount: Deactivated successfully.
Nov 24 18:42:20 compute-0 podman[264035]: 2025-11-24 18:42:20.941290535 +0000 UTC m=+0.157945728 container remove 1afec1a5641165401c6dd150d4a2973812343ebe88c82613a3294d545338fbc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 24 18:42:20 compute-0 systemd[1]: libpod-conmon-1afec1a5641165401c6dd150d4a2973812343ebe88c82613a3294d545338fbc2.scope: Deactivated successfully.
Nov 24 18:42:21 compute-0 podman[264074]: 2025-11-24 18:42:21.102960576 +0000 UTC m=+0.040839727 container create 78150cf2e376f227456599edbfccce341b069fe61975540abb16bcfff49099e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 18:42:21 compute-0 systemd[1]: Started libpod-conmon-78150cf2e376f227456599edbfccce341b069fe61975540abb16bcfff49099e2.scope.
Nov 24 18:42:21 compute-0 podman[264074]: 2025-11-24 18:42:21.081955944 +0000 UTC m=+0.019835115 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:42:21 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:42:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6743cb0021fd12302919e7cbb915c5c14e8706d47a1e60fd48f8e20b7acbeff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:42:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6743cb0021fd12302919e7cbb915c5c14e8706d47a1e60fd48f8e20b7acbeff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:42:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6743cb0021fd12302919e7cbb915c5c14e8706d47a1e60fd48f8e20b7acbeff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:42:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6743cb0021fd12302919e7cbb915c5c14e8706d47a1e60fd48f8e20b7acbeff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:42:21 compute-0 podman[264074]: 2025-11-24 18:42:21.239310857 +0000 UTC m=+0.177190028 container init 78150cf2e376f227456599edbfccce341b069fe61975540abb16bcfff49099e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lederberg, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 18:42:21 compute-0 podman[264074]: 2025-11-24 18:42:21.246785133 +0000 UTC m=+0.184664284 container start 78150cf2e376f227456599edbfccce341b069fe61975540abb16bcfff49099e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:42:21 compute-0 podman[264074]: 2025-11-24 18:42:21.26517251 +0000 UTC m=+0.203051691 container attach 78150cf2e376f227456599edbfccce341b069fe61975540abb16bcfff49099e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:42:21 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:21 compute-0 sudo[264221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cllclqlixtrajbcwyckpyozajcswjost ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009741.1157036-1208-33357345657886/AnsiballZ_getent.py'
Nov 24 18:42:21 compute-0 sudo[264221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:21 compute-0 python3.9[264223]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 24 18:42:21 compute-0 sudo[264221]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:22 compute-0 infallible_lederberg[264134]: {
Nov 24 18:42:22 compute-0 infallible_lederberg[264134]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:42:22 compute-0 infallible_lederberg[264134]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:42:22 compute-0 infallible_lederberg[264134]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:42:22 compute-0 infallible_lederberg[264134]:         "osd_id": 0,
Nov 24 18:42:22 compute-0 infallible_lederberg[264134]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:42:22 compute-0 infallible_lederberg[264134]:         "type": "bluestore"
Nov 24 18:42:22 compute-0 infallible_lederberg[264134]:     },
Nov 24 18:42:22 compute-0 infallible_lederberg[264134]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:42:22 compute-0 infallible_lederberg[264134]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:42:22 compute-0 infallible_lederberg[264134]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:42:22 compute-0 infallible_lederberg[264134]:         "osd_id": 1,
Nov 24 18:42:22 compute-0 infallible_lederberg[264134]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:42:22 compute-0 infallible_lederberg[264134]:         "type": "bluestore"
Nov 24 18:42:22 compute-0 infallible_lederberg[264134]:     },
Nov 24 18:42:22 compute-0 infallible_lederberg[264134]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:42:22 compute-0 infallible_lederberg[264134]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:42:22 compute-0 infallible_lederberg[264134]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:42:22 compute-0 infallible_lederberg[264134]:         "osd_id": 2,
Nov 24 18:42:22 compute-0 infallible_lederberg[264134]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:42:22 compute-0 infallible_lederberg[264134]:         "type": "bluestore"
Nov 24 18:42:22 compute-0 infallible_lederberg[264134]:     }
Nov 24 18:42:22 compute-0 infallible_lederberg[264134]: }
Nov 24 18:42:22 compute-0 systemd[1]: libpod-78150cf2e376f227456599edbfccce341b069fe61975540abb16bcfff49099e2.scope: Deactivated successfully.
Nov 24 18:42:22 compute-0 podman[264074]: 2025-11-24 18:42:22.254734001 +0000 UTC m=+1.192613152 container died 78150cf2e376f227456599edbfccce341b069fe61975540abb16bcfff49099e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lederberg, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:42:22 compute-0 systemd[1]: libpod-78150cf2e376f227456599edbfccce341b069fe61975540abb16bcfff49099e2.scope: Consumed 1.012s CPU time.
Nov 24 18:42:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6743cb0021fd12302919e7cbb915c5c14e8706d47a1e60fd48f8e20b7acbeff-merged.mount: Deactivated successfully.
Nov 24 18:42:22 compute-0 sudo[264409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lohcrzalsueitmzerqqibsfjwanppgkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009741.8934069-1216-254241320722298/AnsiballZ_group.py'
Nov 24 18:42:22 compute-0 sudo[264409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:22 compute-0 podman[264074]: 2025-11-24 18:42:22.31178602 +0000 UTC m=+1.249665171 container remove 78150cf2e376f227456599edbfccce341b069fe61975540abb16bcfff49099e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Nov 24 18:42:22 compute-0 systemd[1]: libpod-conmon-78150cf2e376f227456599edbfccce341b069fe61975540abb16bcfff49099e2.scope: Deactivated successfully.
Nov 24 18:42:22 compute-0 sudo[263969]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:42:22 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:42:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:42:22 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:42:22 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 5e4e65c7-8e6b-42df-ab6f-24c7dcc32b3a does not exist
Nov 24 18:42:22 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 0b772f2a-8585-4528-bb59-4b0c8bbeffe7 does not exist
Nov 24 18:42:22 compute-0 sudo[264417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:42:22 compute-0 sudo[264417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:42:22 compute-0 sudo[264417]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:22 compute-0 sudo[264442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:42:22 compute-0 sudo[264442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:42:22 compute-0 sudo[264442]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:22 compute-0 python3.9[264416]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 18:42:22 compute-0 groupadd[264467]: group added to /etc/group: name=nova, GID=42436
Nov 24 18:42:22 compute-0 groupadd[264467]: group added to /etc/gshadow: name=nova
Nov 24 18:42:22 compute-0 groupadd[264467]: new group: name=nova, GID=42436
Nov 24 18:42:22 compute-0 sudo[264409]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:42:22.734 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:42:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:42:22.735 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:42:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:42:22.735 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:42:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:42:23 compute-0 ceph-mon[74927]: pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:23 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:42:23 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:42:23 compute-0 sudo[264622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbfcifwdpqavkdoewnyatldgnjevavye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009742.8564777-1224-2391848795750/AnsiballZ_user.py'
Nov 24 18:42:23 compute-0 sudo[264622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:23 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:23 compute-0 python3.9[264624]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 24 18:42:23 compute-0 useradd[264627]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Nov 24 18:42:23 compute-0 useradd[264627]: add 'nova' to group 'libvirt'
Nov 24 18:42:23 compute-0 useradd[264627]: add 'nova' to shadow group 'libvirt'
Nov 24 18:42:23 compute-0 podman[264626]: 2025-11-24 18:42:23.717700706 +0000 UTC m=+0.088704788 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller)
Nov 24 18:42:23 compute-0 sudo[264622]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:24 compute-0 sshd-session[264684]: Accepted publickey for zuul from 192.168.122.30 port 37700 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:42:24 compute-0 systemd-logind[822]: New session 53 of user zuul.
Nov 24 18:42:24 compute-0 systemd[1]: Started Session 53 of User zuul.
Nov 24 18:42:24 compute-0 sshd-session[264684]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:42:24 compute-0 sshd-session[264687]: Received disconnect from 192.168.122.30 port 37700:11: disconnected by user
Nov 24 18:42:24 compute-0 sshd-session[264687]: Disconnected from user zuul 192.168.122.30 port 37700
Nov 24 18:42:24 compute-0 sshd-session[264684]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:42:24 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Nov 24 18:42:24 compute-0 systemd-logind[822]: Session 53 logged out. Waiting for processes to exit.
Nov 24 18:42:24 compute-0 systemd-logind[822]: Removed session 53.
Nov 24 18:42:25 compute-0 ceph-mon[74927]: pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:25 compute-0 python3.9[264837]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:42:25 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:25 compute-0 podman[264932]: 2025-11-24 18:42:25.761293879 +0000 UTC m=+0.049162684 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 18:42:25 compute-0 podman[264933]: 2025-11-24 18:42:25.781734187 +0000 UTC m=+0.064348221 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 18:42:25 compute-0 python3.9[264982]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009744.9251544-1249-275735206591146/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:42:26 compute-0 python3.9[265147]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:42:27 compute-0 python3.9[265223]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:42:27 compute-0 ceph-mon[74927]: pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:27 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:42:27 compute-0 python3.9[265373]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:42:28 compute-0 python3.9[265494]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009747.2194283-1249-72392914024348/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:42:28 compute-0 ceph-mon[74927]: pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:28 compute-0 python3.9[265644]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:42:29 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:29 compute-0 python3.9[265765]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009748.456035-1249-36468422848022/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:42:30 compute-0 python3.9[265915]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:42:30 compute-0 ceph-mon[74927]: pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:30 compute-0 python3.9[266036]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009749.6552413-1249-182168737992786/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:42:31 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:31 compute-0 python3.9[266186]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:42:32 compute-0 python3.9[266307]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009751.1491458-1249-104742737783097/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:42:32 compute-0 ceph-mon[74927]: pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:42:32 compute-0 sudo[266457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwilqsudtlqgkxgixrmdbmxdhhppetqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009752.5598228-1332-45716360555677/AnsiballZ_file.py'
Nov 24 18:42:32 compute-0 sudo[266457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:33 compute-0 python3.9[266459]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:42:33 compute-0 sudo[266457]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:33 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:33 compute-0 sudo[266609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqawqvvvtsiinumxwwxysocmvxgyzhvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009753.4300022-1340-276543253211331/AnsiballZ_copy.py'
Nov 24 18:42:33 compute-0 sudo[266609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:33 compute-0 python3.9[266611]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:42:33 compute-0 sudo[266609]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:34 compute-0 sudo[266761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpvlezkodecrtbggzapojwnuzdgzgvhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009754.1588795-1348-110563540888667/AnsiballZ_stat.py'
Nov 24 18:42:34 compute-0 sudo[266761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:34 compute-0 ceph-mon[74927]: pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:42:34
Nov 24 18:42:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:42:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:42:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'images', '.mgr', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'vms']
Nov 24 18:42:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:42:34 compute-0 python3.9[266763]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:42:34 compute-0 sudo[266761]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:42:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:42:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:42:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:42:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:42:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:42:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:42:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:42:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:42:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:42:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:42:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:42:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:42:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:42:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:42:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:42:35 compute-0 sudo[266913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owpfnilrgrecnzopkjgpopguhajaraii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009754.7791407-1356-268345640860651/AnsiballZ_stat.py'
Nov 24 18:42:35 compute-0 sudo[266913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:35 compute-0 python3.9[266915]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:42:35 compute-0 sudo[266913]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:35 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:35 compute-0 sudo[267036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbdhjoanunrdosngzjwjjejhronfwqgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009754.7791407-1356-268345640860651/AnsiballZ_copy.py'
Nov 24 18:42:35 compute-0 sudo[267036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:35 compute-0 python3.9[267038]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764009754.7791407-1356-268345640860651/.source _original_basename=.nuuue61l follow=False checksum=ab11e1f197d206cb17585669c45c8e90deecfff1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 24 18:42:35 compute-0 sudo[267036]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:36 compute-0 python3.9[267190]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:42:36 compute-0 ceph-mon[74927]: pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:37 compute-0 python3.9[267342]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:42:37 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:37 compute-0 python3.9[267463]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009756.7429729-1382-128211473841834/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:42:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:42:38 compute-0 python3.9[267613]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 18:42:38 compute-0 ceph-mon[74927]: pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:38 compute-0 python3.9[267734]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009757.9065819-1397-138364876570183/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 18:42:39 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v803: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:39 compute-0 sudo[267884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbjhzdgbosmfekrhoxxifkettqqdpoks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009759.2352533-1414-29136942042706/AnsiballZ_container_config_data.py'
Nov 24 18:42:39 compute-0 sudo[267884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:39 compute-0 python3.9[267886]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 24 18:42:39 compute-0 sudo[267884]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:40 compute-0 sudo[268036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxysesitwjkuqwsqiguohjclkyioovjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009759.9320047-1423-153562666377903/AnsiballZ_container_config_hash.py'
Nov 24 18:42:40 compute-0 sudo[268036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:40 compute-0 python3.9[268038]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 18:42:40 compute-0 sudo[268036]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:40 compute-0 ceph-mon[74927]: pgmap v803: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:41 compute-0 sudo[268188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snrykkwwppbvsxzobifpolfsppwhbisp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764009760.7177875-1433-255740068593992/AnsiballZ_edpm_container_manage.py'
Nov 24 18:42:41 compute-0 sudo[268188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:41 compute-0 python3[268190]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 18:42:41 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v804: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:42 compute-0 ceph-mon[74927]: pgmap v804: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:42:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:42:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:42:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:42:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:42:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:42:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:42:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:42:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:42:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:42:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:42:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:42:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:42:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:42:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:42:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:42:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:42:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:42:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:42:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:42:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:42:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:42:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:42:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:42:43 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v805: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:44 compute-0 ceph-mon[74927]: pgmap v805: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:45 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v806: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:46 compute-0 ceph-mon[74927]: pgmap v806: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:47 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v807: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:42:48 compute-0 ceph-mon[74927]: pgmap v807: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:49 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v808: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:50 compute-0 podman[268205]: 2025-11-24 18:42:50.157342865 +0000 UTC m=+8.786662054 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 24 18:42:50 compute-0 podman[268288]: 2025-11-24 18:42:50.306381962 +0000 UTC m=+0.049698767 container create 5e27af85292a9b40c3e4241abdb9b05f5a155fee134dfa070b318e923dd00f66 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 24 18:42:50 compute-0 podman[268288]: 2025-11-24 18:42:50.279423731 +0000 UTC m=+0.022740596 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 24 18:42:50 compute-0 python3[268190]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 24 18:42:50 compute-0 sudo[268188]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:50 compute-0 sudo[268475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrqakqymutudpqseyvydfbdjaguddsqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009770.641931-1441-247218673924460/AnsiballZ_stat.py'
Nov 24 18:42:50 compute-0 sudo[268475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:51 compute-0 ceph-mon[74927]: pgmap v808: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:51 compute-0 python3.9[268477]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:42:51 compute-0 sudo[268475]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:51 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v809: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:52 compute-0 sudo[268629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fedyqbloxqvzyweobmaaskolfnugwacz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009771.686301-1453-248172321508162/AnsiballZ_container_config_data.py'
Nov 24 18:42:52 compute-0 sudo[268629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:52 compute-0 python3.9[268631]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 24 18:42:52 compute-0 sudo[268629]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:42:52 compute-0 sudo[268781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvmdmzefhobccpnpsjqslamgbhfxcqgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009772.5278676-1462-216302573222594/AnsiballZ_container_config_hash.py'
Nov 24 18:42:52 compute-0 sudo[268781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:53 compute-0 ceph-mon[74927]: pgmap v809: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:53 compute-0 python3.9[268783]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 18:42:53 compute-0 sudo[268781]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:53 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v810: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:53 compute-0 sudo[268933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frrmulaobmybvagnsjeyrauoxexwcfyv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764009773.4264712-1472-58894076744485/AnsiballZ_edpm_container_manage.py'
Nov 24 18:42:53 compute-0 sudo[268933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:53 compute-0 podman[268935]: 2025-11-24 18:42:53.921844418 +0000 UTC m=+0.136239979 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 24 18:42:54 compute-0 python3[268936]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 18:42:54 compute-0 podman[268998]: 2025-11-24 18:42:54.359923813 +0000 UTC m=+0.052035655 container create 8bfccfbfd425066c99ff87323aabc0b5530ac34ab41ffeb77e223003743eba60 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible)
Nov 24 18:42:54 compute-0 podman[268998]: 2025-11-24 18:42:54.333763992 +0000 UTC m=+0.025875874 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 24 18:42:54 compute-0 python3[268936]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Nov 24 18:42:54 compute-0 sudo[268933]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:55 compute-0 sudo[269186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvvjyimoihhxauwyntecaegcvcqpvslc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009774.722303-1480-94872981603218/AnsiballZ_stat.py'
Nov 24 18:42:55 compute-0 sudo[269186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:55 compute-0 ceph-mon[74927]: pgmap v810: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:55 compute-0 python3.9[269188]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:42:55 compute-0 sudo[269186]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:55 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v811: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:55 compute-0 podman[269314]: 2025-11-24 18:42:55.925083719 +0000 UTC m=+0.055227125 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 24 18:42:55 compute-0 sudo[269373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvrpzengqppafxdrmbnqnzvikarsyepw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009775.5864675-1489-99973779322563/AnsiballZ_file.py'
Nov 24 18:42:55 compute-0 sudo[269373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:55 compute-0 podman[269315]: 2025-11-24 18:42:55.947060775 +0000 UTC m=+0.077657572 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:42:56 compute-0 python3.9[269380]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:42:56 compute-0 sudo[269373]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:56 compute-0 sudo[269529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgbaaasvcyhytmhthizwzhydrzcwrpbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009776.204409-1489-113297567629772/AnsiballZ_copy.py'
Nov 24 18:42:56 compute-0 sudo[269529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:56 compute-0 python3.9[269531]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764009776.204409-1489-113297567629772/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 18:42:56 compute-0 sudo[269529]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:57 compute-0 sudo[269605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztfvbvkypsbooaotjzhgvxjlhzuiohpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009776.204409-1489-113297567629772/AnsiballZ_systemd.py'
Nov 24 18:42:57 compute-0 sudo[269605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:57 compute-0 ceph-mon[74927]: pgmap v811: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:57 compute-0 python3.9[269607]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 18:42:57 compute-0 systemd[1]: Reloading.
Nov 24 18:42:57 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v812: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:57 compute-0 systemd-rc-local-generator[269632]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:42:57 compute-0 systemd-sysv-generator[269636]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:42:57 compute-0 sudo[269605]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:42:58 compute-0 sudo[269716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leywplmluaehyafxeriaeqnanaraeozk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009776.204409-1489-113297567629772/AnsiballZ_systemd.py'
Nov 24 18:42:58 compute-0 sudo[269716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:42:58 compute-0 python3.9[269718]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 18:42:58 compute-0 systemd[1]: Reloading.
Nov 24 18:42:58 compute-0 systemd-rc-local-generator[269747]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 18:42:58 compute-0 systemd-sysv-generator[269752]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 18:42:58 compute-0 systemd[1]: Starting nova_compute container...
Nov 24 18:42:58 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa0216af483d77ec471622a589be635ab969f169ad184c06e2ca10bd6aa7a81/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 24 18:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa0216af483d77ec471622a589be635ab969f169ad184c06e2ca10bd6aa7a81/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 24 18:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa0216af483d77ec471622a589be635ab969f169ad184c06e2ca10bd6aa7a81/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 24 18:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa0216af483d77ec471622a589be635ab969f169ad184c06e2ca10bd6aa7a81/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 24 18:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa0216af483d77ec471622a589be635ab969f169ad184c06e2ca10bd6aa7a81/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 24 18:42:58 compute-0 podman[269758]: 2025-11-24 18:42:58.877143606 +0000 UTC m=+0.093701711 container init 8bfccfbfd425066c99ff87323aabc0b5530ac34ab41ffeb77e223003743eba60 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, container_name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 24 18:42:58 compute-0 podman[269758]: 2025-11-24 18:42:58.884863828 +0000 UTC m=+0.101421903 container start 8bfccfbfd425066c99ff87323aabc0b5530ac34ab41ffeb77e223003743eba60 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_id=edpm, container_name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 18:42:58 compute-0 podman[269758]: nova_compute
Nov 24 18:42:58 compute-0 nova_compute[269773]: + sudo -E kolla_set_configs
Nov 24 18:42:58 compute-0 systemd[1]: Started nova_compute container.
Nov 24 18:42:58 compute-0 sudo[269716]: pam_unix(sudo:session): session closed for user root
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Validating config file
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Copying service configuration files
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Deleting /etc/ceph
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Creating directory /etc/ceph
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Setting permission for /etc/ceph
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Writing out command to execute
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 24 18:42:58 compute-0 nova_compute[269773]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 24 18:42:58 compute-0 nova_compute[269773]: ++ cat /run_command
Nov 24 18:42:58 compute-0 nova_compute[269773]: + CMD=nova-compute
Nov 24 18:42:58 compute-0 nova_compute[269773]: + ARGS=
Nov 24 18:42:58 compute-0 nova_compute[269773]: + sudo kolla_copy_cacerts
Nov 24 18:42:58 compute-0 nova_compute[269773]: + [[ ! -n '' ]]
Nov 24 18:42:58 compute-0 nova_compute[269773]: + . kolla_extend_start
Nov 24 18:42:58 compute-0 nova_compute[269773]: + echo 'Running command: '\''nova-compute'\'''
Nov 24 18:42:58 compute-0 nova_compute[269773]: + umask 0022
Nov 24 18:42:58 compute-0 nova_compute[269773]: Running command: 'nova-compute'
Nov 24 18:42:58 compute-0 nova_compute[269773]: + exec nova-compute
Nov 24 18:42:59 compute-0 ceph-mon[74927]: pgmap v812: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:59 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v813: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:42:59 compute-0 python3.9[269934]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:43:00 compute-0 python3.9[270085]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:43:01 compute-0 ceph-mon[74927]: pgmap v813: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:01 compute-0 nova_compute[269773]: 2025-11-24 18:43:01.451 269777 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 24 18:43:01 compute-0 nova_compute[269773]: 2025-11-24 18:43:01.451 269777 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 24 18:43:01 compute-0 nova_compute[269773]: 2025-11-24 18:43:01.451 269777 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 24 18:43:01 compute-0 nova_compute[269773]: 2025-11-24 18:43:01.451 269777 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Nov 24 18:43:01 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v814: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:01 compute-0 python3.9[270235]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 18:43:01 compute-0 nova_compute[269773]: 2025-11-24 18:43:01.598 269777 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:43:01 compute-0 nova_compute[269773]: 2025-11-24 18:43:01.632 269777 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:43:01 compute-0 nova_compute[269773]: 2025-11-24 18:43:01.632 269777 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.241 269777 INFO nova.virt.driver [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 24 18:43:02 compute-0 sudo[270389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oilrcvmyyburguysgnmurwkjyazwvqht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009781.7729464-1549-114393527161550/AnsiballZ_podman_container.py'
Nov 24 18:43:02 compute-0 sudo[270389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.462 269777 INFO nova.compute.provider_config [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.481 269777 DEBUG oslo_concurrency.lockutils [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.481 269777 DEBUG oslo_concurrency.lockutils [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.481 269777 DEBUG oslo_concurrency.lockutils [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.482 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.482 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.482 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.482 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.482 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.482 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.483 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.483 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.483 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.483 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.483 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.484 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.484 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.484 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.484 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.484 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.484 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.484 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.485 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.485 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.485 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.485 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.485 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.485 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.485 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.486 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.486 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.486 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.486 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.486 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.486 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.487 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.487 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.487 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.487 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.487 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.487 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.487 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.488 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.488 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.488 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.488 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.488 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.489 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.489 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.489 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.489 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.489 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.489 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.489 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.490 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.490 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.490 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.490 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.490 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.490 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.491 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.491 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.491 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.491 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.491 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.491 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.491 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.491 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.492 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.492 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.492 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.492 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.492 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.492 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.493 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.493 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.493 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.493 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.493 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.493 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.494 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.494 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.494 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.494 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.494 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.494 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.495 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.495 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.495 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.495 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.495 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.495 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.496 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.496 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.496 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.496 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.496 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.496 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.497 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.497 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.497 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.497 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.497 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.497 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.498 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.498 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.498 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.498 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.498 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.498 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.498 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.499 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.499 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.499 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.499 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.499 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.500 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.500 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.500 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.500 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.500 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.500 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.501 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.501 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.501 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.501 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.501 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.501 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.502 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.502 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.502 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.502 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.502 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.502 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.502 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.503 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.503 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.503 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.503 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.503 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.503 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.503 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.504 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.504 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.504 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.504 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.504 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.504 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.505 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.505 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.505 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.505 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.505 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.505 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.506 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.506 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.506 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.506 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.506 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.506 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.506 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.507 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.507 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.507 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.507 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.507 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.507 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.508 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.508 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.508 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.508 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.508 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.508 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.509 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.509 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.509 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.509 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.509 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.509 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.510 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.510 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.510 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.510 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.510 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.511 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.511 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.511 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.511 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.511 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.511 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.512 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.512 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.512 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.512 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.512 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.513 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.513 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.513 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.513 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.513 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.513 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.514 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.514 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.514 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.514 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.514 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.514 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.514 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.515 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.515 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.515 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.515 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.515 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.516 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.516 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.516 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.516 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.516 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.516 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.517 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.517 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.517 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.517 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.517 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.517 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.517 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.518 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.518 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.518 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.518 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.518 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.518 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.518 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.519 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.519 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.519 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.519 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.519 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.519 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.520 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.520 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.520 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.520 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.521 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.521 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.521 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.521 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.521 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.521 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.521 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.522 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.522 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.522 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.522 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.522 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.522 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.522 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.522 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.523 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.523 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.523 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.523 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.523 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.524 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.524 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.524 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.524 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.524 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.524 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.525 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.525 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.525 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.525 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.525 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.525 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.525 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.526 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.526 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.526 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.526 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.526 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.526 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.526 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.527 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.527 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.527 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.527 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.527 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.528 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.528 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.528 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.528 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.528 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.529 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.529 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.529 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.529 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.529 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.530 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.530 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.530 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.530 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.530 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.531 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.531 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.531 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.531 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.531 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.532 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.532 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.532 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.532 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.532 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.532 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.533 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.533 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.533 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.533 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.533 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.534 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.534 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.534 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.534 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.534 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.534 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.535 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.535 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.535 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.535 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.535 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.535 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.536 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.536 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.536 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.536 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.536 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.536 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.537 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.537 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.537 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.537 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.537 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.537 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.538 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.538 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.538 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.538 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.538 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.538 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.539 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.539 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.539 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.539 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.539 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.539 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.539 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.539 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.540 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.540 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.540 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.540 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.540 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.541 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.541 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.541 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.541 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.541 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.541 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.541 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.542 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.542 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.542 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.542 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.542 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.542 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.542 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.543 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.543 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.543 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.543 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.543 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.543 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.543 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.544 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.544 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.544 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.544 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.544 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.544 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.545 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.545 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.545 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.545 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.545 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.545 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.545 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.546 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.546 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.546 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.546 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.546 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.546 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.546 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.547 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.547 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.547 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.547 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.547 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.547 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.547 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.548 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.548 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.548 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.548 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.548 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.548 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.548 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.549 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.549 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.549 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.549 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.549 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.549 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.550 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.550 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.550 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.550 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.550 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.550 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.550 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.551 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.551 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.551 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.551 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.551 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.551 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.551 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.552 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.552 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.552 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.552 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.552 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.552 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.552 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.553 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.553 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.553 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.553 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.553 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.553 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.554 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.554 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.554 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.554 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.554 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.554 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.554 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.555 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.555 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.555 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.555 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.555 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.555 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.556 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.556 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.556 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.556 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.556 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.556 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.556 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.557 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.557 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.557 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.557 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.557 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.557 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.557 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.558 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.558 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.558 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.558 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.558 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.558 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.558 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.559 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.559 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.559 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.559 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.559 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.559 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.560 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.560 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.560 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.560 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.560 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.560 269777 WARNING oslo_config.cfg [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 24 18:43:02 compute-0 nova_compute[269773]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 24 18:43:02 compute-0 nova_compute[269773]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 24 18:43:02 compute-0 nova_compute[269773]: and ``live_migration_inbound_addr`` respectively.
Nov 24 18:43:02 compute-0 nova_compute[269773]: ).  Its value may be silently ignored in the future.
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.561 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.561 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.561 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.561 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.561 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.561 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.562 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.562 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.562 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.562 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.562 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.562 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.562 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.563 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.563 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.563 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.563 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.563 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.563 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.rbd_secret_uuid        = e5ee928f-099b-569b-93c9-ecf025cbb50d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.564 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.564 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.564 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.564 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.564 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.564 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.564 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.565 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.565 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.565 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.565 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.565 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.565 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.566 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.566 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.566 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.566 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.566 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.566 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.566 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.567 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.567 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.567 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.567 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.567 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.568 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.568 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.568 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.568 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.568 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.568 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.568 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.569 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.569 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.569 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.569 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.569 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.569 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.570 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.570 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.570 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.570 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.570 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.570 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.570 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.571 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.571 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.571 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.571 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.571 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.571 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.572 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.572 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.572 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.572 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.572 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.572 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.572 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.573 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.573 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.573 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.573 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.573 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.573 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.573 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.574 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.574 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.574 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.574 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.574 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.574 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.574 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.575 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.575 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.575 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.575 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.575 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.575 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.575 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.576 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.576 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.576 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.576 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.576 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.576 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.577 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.577 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.577 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.577 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.577 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.577 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.577 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.578 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.578 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.578 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.578 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.578 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.578 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.578 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.578 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.579 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.579 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.579 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.579 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.579 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.579 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.579 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.580 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.580 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.580 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.580 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.580 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.580 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.580 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.581 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.581 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.581 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.581 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.581 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.581 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.582 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.582 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.582 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.582 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.582 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.582 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.582 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.583 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.583 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.583 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.583 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.583 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.583 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.583 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.584 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.584 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.584 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.584 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.584 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.584 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.585 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.585 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.585 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.585 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.585 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.585 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.585 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.586 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.586 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.586 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.586 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.586 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.586 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.586 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.586 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.587 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.587 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.587 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.587 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.587 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.587 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.588 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.588 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.588 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.588 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.588 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.588 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.588 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.589 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.589 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.589 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.589 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.589 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.589 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.589 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.590 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.590 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.590 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.590 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.590 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.590 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.591 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.591 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.591 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.591 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.591 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.591 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.591 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.592 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.592 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.592 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.592 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.592 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.592 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.592 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.593 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.593 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.593 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.593 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.593 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.593 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.593 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.594 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.594 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.594 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.594 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.594 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.594 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.594 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.595 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.595 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.595 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.595 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.595 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.595 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.595 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.596 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.596 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.596 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.596 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.596 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.596 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.597 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.597 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.597 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.597 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.597 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.597 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.597 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.598 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.598 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.598 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.598 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.598 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.598 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.599 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.599 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.599 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.599 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.599 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.599 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.599 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.600 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.600 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.600 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.600 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.600 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.600 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.601 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.601 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.601 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.601 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.601 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.601 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.601 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.602 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.602 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.602 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.602 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.602 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.602 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.602 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.603 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.603 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.603 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.603 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.603 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.603 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.603 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.604 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.604 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.604 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.604 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.604 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.604 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.605 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.605 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.605 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.605 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.605 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.605 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.605 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.606 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.606 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.606 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.606 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.606 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.606 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.606 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.607 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.607 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.607 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.607 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.607 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.607 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.607 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.608 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.608 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.608 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.608 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.608 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.608 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.608 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.609 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.609 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.609 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.609 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.609 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.609 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.609 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.610 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.610 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.610 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.610 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.610 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.610 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.610 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.611 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.611 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.611 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.611 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.611 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.611 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.611 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.612 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.612 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.612 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.612 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.612 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.612 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.613 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.613 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.613 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.613 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.613 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.613 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.613 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.613 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.614 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.614 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.614 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.614 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.614 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.614 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.614 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.615 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.615 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.615 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.615 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.615 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.615 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.615 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.616 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.616 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.616 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.616 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.616 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 python3.9[270391]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.616 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.617 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.617 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.617 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.617 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.617 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.617 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.617 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.618 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.618 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.618 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.618 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.618 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.618 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.619 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.619 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.619 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.619 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.619 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.619 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.619 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.620 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.620 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.620 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.620 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.620 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.620 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.620 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.621 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.621 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.621 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.621 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.621 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.621 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.621 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.622 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.622 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.622 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.622 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.622 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.622 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.622 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.623 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.623 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.623 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.623 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.623 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.623 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.624 269777 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 24 18:43:02 compute-0 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.641 269777 DEBUG nova.virt.libvirt.host [None req-a82e4c6c-504a-48f6-860b-ffeac4708421 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.642 269777 DEBUG nova.virt.libvirt.host [None req-a82e4c6c-504a-48f6-860b-ffeac4708421 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.642 269777 DEBUG nova.virt.libvirt.host [None req-a82e4c6c-504a-48f6-860b-ffeac4708421 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.642 269777 DEBUG nova.virt.libvirt.host [None req-a82e4c6c-504a-48f6-860b-ffeac4708421 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 24 18:43:02 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 24 18:43:02 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 24 18:43:02 compute-0 sudo[270389]: pam_unix(sudo:session): session closed for user root
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.733 269777 DEBUG nova.virt.libvirt.host [None req-a82e4c6c-504a-48f6-860b-ffeac4708421 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fd49633f910> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.736 269777 DEBUG nova.virt.libvirt.host [None req-a82e4c6c-504a-48f6-860b-ffeac4708421 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fd49633f910> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.736 269777 INFO nova.virt.libvirt.driver [None req-a82e4c6c-504a-48f6-860b-ffeac4708421 - - - - - -] Connection event '1' reason 'None'
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.754 269777 WARNING nova.virt.libvirt.driver [None req-a82e4c6c-504a-48f6-860b-ffeac4708421 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 24 18:43:02 compute-0 nova_compute[269773]: 2025-11-24 18:43:02.754 269777 DEBUG nova.virt.libvirt.volume.mount [None req-a82e4c6c-504a-48f6-860b-ffeac4708421 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 24 18:43:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:43:03 compute-0 ceph-mon[74927]: pgmap v814: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:03 compute-0 sudo[270624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgymlgeaknukhijkofywjqtfobyipare ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009782.9819465-1557-136713121305009/AnsiballZ_systemd.py'
Nov 24 18:43:03 compute-0 sudo[270624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:43:03 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v815: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:03 compute-0 nova_compute[269773]: 2025-11-24 18:43:03.637 269777 INFO nova.virt.libvirt.host [None req-a82e4c6c-504a-48f6-860b-ffeac4708421 - - - - - -] Libvirt host capabilities <capabilities>
Nov 24 18:43:03 compute-0 nova_compute[269773]: 
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <host>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <uuid>ce8f254e-4b98-4140-abc7-8040b35476ad</uuid>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <cpu>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <arch>x86_64</arch>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model>EPYC-Rome-v4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <vendor>AMD</vendor>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <microcode version='16777317'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <signature family='23' model='49' stepping='0'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature name='x2apic'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature name='tsc-deadline'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature name='osxsave'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature name='hypervisor'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature name='tsc_adjust'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature name='spec-ctrl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature name='stibp'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature name='arch-capabilities'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature name='ssbd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature name='cmp_legacy'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature name='topoext'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature name='virt-ssbd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature name='lbrv'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature name='tsc-scale'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature name='vmcb-clean'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature name='pause-filter'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature name='pfthreshold'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature name='svme-addr-chk'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature name='rdctl-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature name='skip-l1dfl-vmentry'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature name='mds-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature name='pschange-mc-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <pages unit='KiB' size='4'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <pages unit='KiB' size='2048'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <pages unit='KiB' size='1048576'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </cpu>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <power_management>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <suspend_mem/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </power_management>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <iommu support='no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <migration_features>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <live/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <uri_transports>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <uri_transport>tcp</uri_transport>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <uri_transport>rdma</uri_transport>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </uri_transports>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </migration_features>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <topology>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <cells num='1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <cell id='0'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:           <memory unit='KiB'>7864320</memory>
Nov 24 18:43:03 compute-0 nova_compute[269773]:           <pages unit='KiB' size='4'>1966080</pages>
Nov 24 18:43:03 compute-0 nova_compute[269773]:           <pages unit='KiB' size='2048'>0</pages>
Nov 24 18:43:03 compute-0 nova_compute[269773]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 24 18:43:03 compute-0 nova_compute[269773]:           <distances>
Nov 24 18:43:03 compute-0 nova_compute[269773]:             <sibling id='0' value='10'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:           </distances>
Nov 24 18:43:03 compute-0 nova_compute[269773]:           <cpus num='8'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:           </cpus>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         </cell>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </cells>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </topology>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <cache>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </cache>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <secmodel>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model>selinux</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <doi>0</doi>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </secmodel>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <secmodel>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model>dac</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <doi>0</doi>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </secmodel>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   </host>
Nov 24 18:43:03 compute-0 nova_compute[269773]: 
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <guest>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <os_type>hvm</os_type>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <arch name='i686'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <wordsize>32</wordsize>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <domain type='qemu'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <domain type='kvm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </arch>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <features>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <pae/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <nonpae/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <acpi default='on' toggle='yes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <apic default='on' toggle='no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <cpuselection/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <deviceboot/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <disksnapshot default='on' toggle='no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <externalSnapshot/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </features>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   </guest>
Nov 24 18:43:03 compute-0 nova_compute[269773]: 
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <guest>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <os_type>hvm</os_type>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <arch name='x86_64'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <wordsize>64</wordsize>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <domain type='qemu'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <domain type='kvm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </arch>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <features>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <acpi default='on' toggle='yes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <apic default='on' toggle='no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <cpuselection/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <deviceboot/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <disksnapshot default='on' toggle='no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <externalSnapshot/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </features>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   </guest>
Nov 24 18:43:03 compute-0 nova_compute[269773]: 
Nov 24 18:43:03 compute-0 nova_compute[269773]: </capabilities>
Nov 24 18:43:03 compute-0 nova_compute[269773]: 
Nov 24 18:43:03 compute-0 python3.9[270626]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 18:43:03 compute-0 nova_compute[269773]: 2025-11-24 18:43:03.650 269777 DEBUG nova.virt.libvirt.host [None req-a82e4c6c-504a-48f6-860b-ffeac4708421 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 24 18:43:03 compute-0 nova_compute[269773]: 2025-11-24 18:43:03.672 269777 DEBUG nova.virt.libvirt.host [None req-a82e4c6c-504a-48f6-860b-ffeac4708421 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 24 18:43:03 compute-0 nova_compute[269773]: <domainCapabilities>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <domain>kvm</domain>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <arch>i686</arch>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <vcpu max='4096'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <iothreads supported='yes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <os supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <enum name='firmware'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <loader supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='type'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>rom</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>pflash</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='readonly'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>yes</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>no</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='secure'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>no</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </loader>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   </os>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <cpu>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <mode name='host-passthrough' supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='hostPassthroughMigratable'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>on</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>off</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </mode>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <mode name='maximum' supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='maximumMigratable'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>on</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>off</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </mode>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <mode name='host-model' supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <vendor>AMD</vendor>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='x2apic'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='hypervisor'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='stibp'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='ssbd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='overflow-recov'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='succor'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='ibrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='lbrv'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='tsc-scale'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='flushbyasid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='pause-filter'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='pfthreshold'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='disable' name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </mode>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <mode name='custom' supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Broadwell'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Broadwell-IBRS'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Broadwell-noTSX'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Broadwell-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Broadwell-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Broadwell-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Broadwell-v4'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Cascadelake-Server'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Cooperlake'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Cooperlake-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Cooperlake-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Denverton'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='mpx'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Denverton-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='mpx'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Denverton-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Denverton-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Dhyana-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-Genoa'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amd-psfd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='auto-ibrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='no-nested-data-bp'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='null-sel-clr-base'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='stibp-always-on'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amd-psfd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='auto-ibrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='no-nested-data-bp'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='null-sel-clr-base'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='stibp-always-on'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-Milan'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-Milan-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-Milan-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amd-psfd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='no-nested-data-bp'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='null-sel-clr-base'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='stibp-always-on'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-Rome'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-Rome-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-Rome-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-Rome-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-v4'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='GraniteRapids'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-fp16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-int8'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-tile'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-fp16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fbsdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrc'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fzrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='mcdt-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pbrsb-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='prefetchiti'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='psdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='serialize'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xfd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='GraniteRapids-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-fp16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-int8'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-tile'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-fp16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fbsdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrc'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fzrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='mcdt-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pbrsb-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='prefetchiti'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='psdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='serialize'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xfd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='GraniteRapids-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-fp16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-int8'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-tile'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx10'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx10-128'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx10-256'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx10-512'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-fp16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='cldemote'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fbsdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrc'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fzrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='mcdt-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdir64b'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdiri'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pbrsb-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='prefetchiti'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='psdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='serialize'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ss'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xfd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Haswell'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Haswell-IBRS'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Haswell-noTSX'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Haswell-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Haswell-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Haswell-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Haswell-v4'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Icelake-Server'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Icelake-Server-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Icelake-Server-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Icelake-Server-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Icelake-Server-v4'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Icelake-Server-v5'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Icelake-Server-v6'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Icelake-Server-v7'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='IvyBridge'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='IvyBridge-IBRS'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='IvyBridge-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='IvyBridge-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='KnightsMill'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-4fmaps'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-4vnniw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512er'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512pf'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ss'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='KnightsMill-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-4fmaps'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-4vnniw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512er'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512pf'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ss'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Opteron_G4'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fma4'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xop'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Opteron_G4-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fma4'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xop'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Opteron_G5'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fma4'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='tbm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xop'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Opteron_G5-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fma4'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='tbm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xop'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='SapphireRapids'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-int8'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-tile'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-fp16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrc'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fzrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='serialize'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xfd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='SapphireRapids-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-int8'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-tile'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-fp16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrc'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fzrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='serialize'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xfd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='SapphireRapids-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-int8'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-tile'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-fp16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fbsdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrc'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fzrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='psdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='serialize'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xfd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='SapphireRapids-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-int8'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-tile'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-fp16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='cldemote'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fbsdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrc'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fzrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdir64b'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdiri'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='psdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='serialize'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ss'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xfd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='SierraForest'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-ne-convert'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni-int8'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='cmpccxadd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fbsdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='mcdt-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pbrsb-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='psdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='serialize'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='SierraForest-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-ne-convert'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni-int8'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='cmpccxadd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fbsdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='mcdt-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pbrsb-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='psdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='serialize'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Client'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Client-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Client-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Client-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Client-v4'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Server'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 systemd[1]: Stopping nova_compute container...
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Server-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Server-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Server-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Server-v4'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Server-v5'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Snowridge'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='cldemote'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='core-capability'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdir64b'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdiri'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='mpx'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='split-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Snowridge-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='cldemote'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='core-capability'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdir64b'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdiri'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='mpx'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='split-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Snowridge-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='cldemote'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='core-capability'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdir64b'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdiri'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='split-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Snowridge-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='cldemote'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='core-capability'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdir64b'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdiri'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='split-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Snowridge-v4'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='cldemote'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdir64b'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdiri'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='athlon'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='3dnow'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='3dnowext'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='athlon-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='3dnow'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='3dnowext'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='core2duo'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ss'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='core2duo-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ss'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='coreduo'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ss'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='coreduo-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ss'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='n270'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ss'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='n270-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ss'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='phenom'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='3dnow'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='3dnowext'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='phenom-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='3dnow'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='3dnowext'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </mode>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   </cpu>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <memoryBacking supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <enum name='sourceType'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <value>file</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <value>anonymous</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <value>memfd</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   </memoryBacking>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <devices>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <disk supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='diskDevice'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>disk</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>cdrom</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>floppy</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>lun</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='bus'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>fdc</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>scsi</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>virtio</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>usb</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>sata</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='model'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>virtio</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>virtio-transitional</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>virtio-non-transitional</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </disk>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <graphics supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='type'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>vnc</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>egl-headless</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>dbus</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </graphics>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <video supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='modelType'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>vga</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>cirrus</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>virtio</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>none</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>bochs</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>ramfb</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </video>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <hostdev supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='mode'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>subsystem</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='startupPolicy'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>default</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>mandatory</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>requisite</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>optional</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='subsysType'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>usb</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>pci</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>scsi</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='capsType'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='pciBackend'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </hostdev>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <rng supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='model'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>virtio</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>virtio-transitional</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>virtio-non-transitional</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='backendModel'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>random</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>egd</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>builtin</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </rng>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <filesystem supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='driverType'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>path</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>handle</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>virtiofs</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </filesystem>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <tpm supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='model'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>tpm-tis</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>tpm-crb</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='backendModel'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>emulator</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>external</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='backendVersion'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>2.0</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </tpm>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <redirdev supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='bus'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>usb</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </redirdev>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <channel supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='type'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>pty</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>unix</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </channel>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <crypto supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='model'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='type'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>qemu</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='backendModel'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>builtin</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </crypto>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <interface supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='backendType'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>default</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>passt</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </interface>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <panic supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='model'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>isa</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>hyperv</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </panic>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <console supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='type'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>null</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>vc</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>pty</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>dev</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>file</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>pipe</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>stdio</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>udp</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>tcp</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>unix</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>qemu-vdagent</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>dbus</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </console>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   </devices>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <features>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <gic supported='no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <vmcoreinfo supported='yes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <genid supported='yes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <backingStoreInput supported='yes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <backup supported='yes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <async-teardown supported='yes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <ps2 supported='yes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <sev supported='no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <sgx supported='no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <hyperv supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='features'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>relaxed</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>vapic</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>spinlocks</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>vpindex</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>runtime</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>synic</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>stimer</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>reset</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>vendor_id</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>frequencies</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>reenlightenment</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>tlbflush</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>ipi</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>avic</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>emsr_bitmap</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>xmm_input</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <defaults>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <spinlocks>4095</spinlocks>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <stimer_direct>on</stimer_direct>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </defaults>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </hyperv>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <launchSecurity supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='sectype'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>tdx</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </launchSecurity>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   </features>
Nov 24 18:43:03 compute-0 nova_compute[269773]: </domainCapabilities>
Nov 24 18:43:03 compute-0 nova_compute[269773]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 18:43:03 compute-0 nova_compute[269773]: 2025-11-24 18:43:03.685 269777 DEBUG nova.virt.libvirt.host [None req-a82e4c6c-504a-48f6-860b-ffeac4708421 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 24 18:43:03 compute-0 nova_compute[269773]: <domainCapabilities>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <domain>kvm</domain>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <arch>i686</arch>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <vcpu max='240'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <iothreads supported='yes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <os supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <enum name='firmware'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <loader supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='type'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>rom</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>pflash</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='readonly'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>yes</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>no</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='secure'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>no</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </loader>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   </os>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <cpu>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <mode name='host-passthrough' supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='hostPassthroughMigratable'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>on</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>off</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </mode>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <mode name='maximum' supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='maximumMigratable'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>on</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>off</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </mode>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <mode name='host-model' supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <vendor>AMD</vendor>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='x2apic'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='hypervisor'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='stibp'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='ssbd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='overflow-recov'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='succor'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='ibrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='lbrv'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='tsc-scale'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='flushbyasid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='pause-filter'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='pfthreshold'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='disable' name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </mode>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <mode name='custom' supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Broadwell'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Broadwell-IBRS'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Broadwell-noTSX'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Broadwell-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Broadwell-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Broadwell-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Broadwell-v4'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Cascadelake-Server'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Cooperlake'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Cooperlake-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Cooperlake-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Denverton'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='mpx'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Denverton-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='mpx'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Denverton-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Denverton-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Dhyana-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-Genoa'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amd-psfd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='auto-ibrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='no-nested-data-bp'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='null-sel-clr-base'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='stibp-always-on'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amd-psfd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='auto-ibrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='no-nested-data-bp'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='null-sel-clr-base'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='stibp-always-on'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-Milan'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-Milan-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-Milan-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amd-psfd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='no-nested-data-bp'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='null-sel-clr-base'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='stibp-always-on'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-Rome'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-Rome-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-Rome-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-Rome-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-v4'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='GraniteRapids'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-fp16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-int8'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-tile'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-fp16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fbsdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrc'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fzrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='mcdt-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pbrsb-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='prefetchiti'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='psdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='serialize'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xfd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='GraniteRapids-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-fp16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-int8'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-tile'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-fp16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fbsdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrc'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fzrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='mcdt-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pbrsb-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='prefetchiti'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='psdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='serialize'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xfd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='GraniteRapids-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-fp16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-int8'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-tile'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx10'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx10-128'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx10-256'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx10-512'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-fp16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='cldemote'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fbsdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrc'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fzrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='mcdt-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdir64b'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdiri'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pbrsb-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='prefetchiti'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='psdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='serialize'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ss'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xfd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Haswell'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Haswell-IBRS'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Haswell-noTSX'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Haswell-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Haswell-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Haswell-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Haswell-v4'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Icelake-Server'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Icelake-Server-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Icelake-Server-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Icelake-Server-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Icelake-Server-v4'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Icelake-Server-v5'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Icelake-Server-v6'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Icelake-Server-v7'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='IvyBridge'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='IvyBridge-IBRS'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='IvyBridge-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='IvyBridge-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='KnightsMill'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-4fmaps'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-4vnniw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512er'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512pf'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ss'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='KnightsMill-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-4fmaps'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-4vnniw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512er'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512pf'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ss'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Opteron_G4'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fma4'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xop'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Opteron_G4-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fma4'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xop'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Opteron_G5'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fma4'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='tbm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xop'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Opteron_G5-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fma4'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='tbm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xop'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='SapphireRapids'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-int8'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-tile'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-fp16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrc'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fzrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='serialize'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xfd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='SapphireRapids-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-int8'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-tile'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-fp16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrc'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fzrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='serialize'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xfd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='SapphireRapids-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-int8'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-tile'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-fp16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fbsdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrc'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fzrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='psdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='serialize'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xfd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='SapphireRapids-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-int8'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-tile'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-fp16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='cldemote'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fbsdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrc'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fzrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdir64b'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdiri'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='psdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='serialize'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ss'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xfd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='SierraForest'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-ne-convert'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni-int8'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='cmpccxadd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fbsdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='mcdt-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pbrsb-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='psdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='serialize'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='SierraForest-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-ne-convert'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni-int8'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='cmpccxadd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fbsdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='mcdt-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pbrsb-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='psdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='serialize'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Client'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Client-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Client-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Client-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Client-v4'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Server'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Server-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Server-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Server-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Server-v4'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Server-v5'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Snowridge'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='cldemote'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='core-capability'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdir64b'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdiri'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='mpx'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='split-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Snowridge-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='cldemote'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='core-capability'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdir64b'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdiri'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='mpx'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='split-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Snowridge-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='cldemote'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='core-capability'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdir64b'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdiri'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='split-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Snowridge-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='cldemote'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='core-capability'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdir64b'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdiri'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='split-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Snowridge-v4'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='cldemote'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdir64b'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdiri'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='athlon'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='3dnow'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='3dnowext'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='athlon-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='3dnow'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='3dnowext'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='core2duo'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ss'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='core2duo-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ss'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='coreduo'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ss'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='coreduo-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ss'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='n270'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ss'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='n270-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ss'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='phenom'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='3dnow'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='3dnowext'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='phenom-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='3dnow'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='3dnowext'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </mode>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   </cpu>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <memoryBacking supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <enum name='sourceType'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <value>file</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <value>anonymous</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <value>memfd</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   </memoryBacking>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <devices>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <disk supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='diskDevice'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>disk</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>cdrom</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>floppy</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>lun</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='bus'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>ide</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>fdc</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>scsi</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>virtio</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>usb</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>sata</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='model'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>virtio</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>virtio-transitional</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>virtio-non-transitional</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </disk>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <graphics supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='type'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>vnc</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>egl-headless</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>dbus</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </graphics>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <video supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='modelType'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>vga</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>cirrus</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>virtio</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>none</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>bochs</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>ramfb</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </video>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <hostdev supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='mode'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>subsystem</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='startupPolicy'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>default</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>mandatory</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>requisite</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>optional</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='subsysType'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>usb</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>pci</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>scsi</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='capsType'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='pciBackend'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </hostdev>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <rng supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='model'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>virtio</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>virtio-transitional</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>virtio-non-transitional</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='backendModel'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>random</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>egd</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>builtin</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </rng>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <filesystem supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='driverType'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>path</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>handle</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>virtiofs</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </filesystem>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <tpm supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='model'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>tpm-tis</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>tpm-crb</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='backendModel'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>emulator</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>external</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='backendVersion'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>2.0</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </tpm>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <redirdev supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='bus'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>usb</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </redirdev>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <channel supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='type'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>pty</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>unix</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </channel>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <crypto supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='model'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='type'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>qemu</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='backendModel'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>builtin</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </crypto>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <interface supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='backendType'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>default</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>passt</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </interface>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <panic supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='model'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>isa</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>hyperv</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </panic>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <console supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='type'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>null</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>vc</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>pty</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>dev</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>file</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>pipe</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>stdio</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>udp</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>tcp</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>unix</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>qemu-vdagent</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>dbus</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </console>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   </devices>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <features>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <gic supported='no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <vmcoreinfo supported='yes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <genid supported='yes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <backingStoreInput supported='yes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <backup supported='yes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <async-teardown supported='yes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <ps2 supported='yes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <sev supported='no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <sgx supported='no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <hyperv supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='features'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>relaxed</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>vapic</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>spinlocks</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>vpindex</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>runtime</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>synic</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>stimer</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>reset</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>vendor_id</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>frequencies</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>reenlightenment</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>tlbflush</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>ipi</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>avic</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>emsr_bitmap</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>xmm_input</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <defaults>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <spinlocks>4095</spinlocks>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <stimer_direct>on</stimer_direct>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </defaults>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </hyperv>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <launchSecurity supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='sectype'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>tdx</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </launchSecurity>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   </features>
Nov 24 18:43:03 compute-0 nova_compute[269773]: </domainCapabilities>
Nov 24 18:43:03 compute-0 nova_compute[269773]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 18:43:03 compute-0 nova_compute[269773]: 2025-11-24 18:43:03.707 269777 DEBUG nova.virt.libvirt.host [None req-a82e4c6c-504a-48f6-860b-ffeac4708421 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 24 18:43:03 compute-0 nova_compute[269773]: 2025-11-24 18:43:03.712 269777 DEBUG nova.virt.libvirt.host [None req-a82e4c6c-504a-48f6-860b-ffeac4708421 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 24 18:43:03 compute-0 nova_compute[269773]: <domainCapabilities>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <domain>kvm</domain>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <arch>x86_64</arch>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <vcpu max='4096'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <iothreads supported='yes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <os supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <enum name='firmware'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <value>efi</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <loader supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='type'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>rom</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>pflash</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='readonly'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>yes</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>no</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='secure'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>yes</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>no</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </loader>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   </os>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <cpu>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <mode name='host-passthrough' supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='hostPassthroughMigratable'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>on</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>off</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </mode>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <mode name='maximum' supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='maximumMigratable'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>on</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>off</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </mode>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <mode name='host-model' supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <vendor>AMD</vendor>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='x2apic'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='hypervisor'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='stibp'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='ssbd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='overflow-recov'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='succor'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='ibrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='lbrv'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='tsc-scale'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='flushbyasid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='pause-filter'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='pfthreshold'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <feature policy='disable' name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </mode>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <mode name='custom' supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Broadwell'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Broadwell-IBRS'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Broadwell-noTSX'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Broadwell-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Broadwell-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Broadwell-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Broadwell-v4'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Cascadelake-Server'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Cooperlake'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Cooperlake-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Cooperlake-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Denverton'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='mpx'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Denverton-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='mpx'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Denverton-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Denverton-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Dhyana-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-Genoa'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amd-psfd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='auto-ibrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='no-nested-data-bp'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='null-sel-clr-base'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='stibp-always-on'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amd-psfd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='auto-ibrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='no-nested-data-bp'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='null-sel-clr-base'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='stibp-always-on'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-Milan'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-Milan-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-Milan-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amd-psfd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='no-nested-data-bp'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='null-sel-clr-base'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='stibp-always-on'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-Rome'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-Rome-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-Rome-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-Rome-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='EPYC-v4'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='GraniteRapids'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-fp16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-int8'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-tile'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-fp16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fbsdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrc'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fzrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='mcdt-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pbrsb-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='prefetchiti'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='psdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='serialize'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xfd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='GraniteRapids-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-fp16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-int8'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-tile'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-fp16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fbsdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrc'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fzrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='mcdt-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pbrsb-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='prefetchiti'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='psdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='serialize'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xfd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='GraniteRapids-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-fp16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-int8'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-tile'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx10'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx10-128'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx10-256'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx10-512'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-fp16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='cldemote'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fbsdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrc'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fzrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='mcdt-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdir64b'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdiri'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pbrsb-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='prefetchiti'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='psdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='serialize'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ss'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xfd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Haswell'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Haswell-IBRS'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Haswell-noTSX'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Haswell-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Haswell-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Haswell-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Haswell-v4'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Icelake-Server'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Icelake-Server-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Icelake-Server-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Icelake-Server-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Icelake-Server-v4'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Icelake-Server-v5'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Icelake-Server-v6'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Icelake-Server-v7'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='IvyBridge'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='IvyBridge-IBRS'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='IvyBridge-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='IvyBridge-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='KnightsMill'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-4fmaps'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-4vnniw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512er'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512pf'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ss'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='KnightsMill-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-4fmaps'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-4vnniw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512er'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512pf'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ss'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Opteron_G4'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fma4'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xop'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Opteron_G4-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fma4'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xop'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Opteron_G5'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fma4'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='tbm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xop'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Opteron_G5-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fma4'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='tbm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xop'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='SapphireRapids'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-int8'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-tile'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-fp16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrc'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fzrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='serialize'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xfd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='SapphireRapids-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-int8'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-tile'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-fp16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrc'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fzrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='serialize'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xfd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='SapphireRapids-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-int8'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-tile'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-fp16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fbsdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrc'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fzrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='psdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='serialize'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xfd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='SapphireRapids-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-int8'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='amx-tile'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-bf16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-fp16'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bitalg'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='cldemote'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fbsdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrc'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fzrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='la57'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdir64b'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdiri'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='psdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='serialize'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ss'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='taa-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xfd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='SierraForest'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-ne-convert'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni-int8'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='cmpccxadd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fbsdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='mcdt-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pbrsb-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='psdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='serialize'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='SierraForest-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-ifma'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-ne-convert'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx-vnni-int8'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='cmpccxadd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fbsdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='fsrs'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ibrs-all'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='mcdt-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pbrsb-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='psdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='serialize'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vaes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Client'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Client-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Client-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Client-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Client-v4'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Server'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Server-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Server-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='hle'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='rtm'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Server-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Server-v4'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Skylake-Server-v5'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512bw'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512cd'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512dq'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512f'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='avx512vl'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='invpcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pcid'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='pku'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Snowridge'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='cldemote'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='core-capability'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdir64b'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdiri'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='mpx'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='split-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Snowridge-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='cldemote'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='core-capability'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdir64b'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdiri'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='mpx'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='split-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Snowridge-v2'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='cldemote'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='core-capability'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdir64b'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdiri'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='split-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Snowridge-v3'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='cldemote'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='core-capability'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdir64b'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdiri'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='split-lock-detect'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='Snowridge-v4'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='cldemote'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='erms'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='gfni'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdir64b'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='movdiri'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='xsaves'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='athlon'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='3dnow'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='3dnowext'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='athlon-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='3dnow'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='3dnowext'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='core2duo'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ss'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='core2duo-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ss'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='coreduo'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ss'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='coreduo-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ss'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='n270'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ss'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='n270-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='ss'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='phenom'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='3dnow'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='3dnowext'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <blockers model='phenom-v1'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='3dnow'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <feature name='3dnowext'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </blockers>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </mode>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   </cpu>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <memoryBacking supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <enum name='sourceType'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <value>file</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <value>anonymous</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <value>memfd</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   </memoryBacking>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <devices>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <disk supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='diskDevice'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>disk</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>cdrom</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>floppy</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>lun</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='bus'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>fdc</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>scsi</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>virtio</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>usb</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>sata</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='model'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>virtio</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>virtio-transitional</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>virtio-non-transitional</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </disk>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <graphics supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='type'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>vnc</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>egl-headless</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>dbus</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </graphics>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <video supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='modelType'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>vga</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>cirrus</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>virtio</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>none</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>bochs</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>ramfb</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </video>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <hostdev supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='mode'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>subsystem</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='startupPolicy'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>default</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>mandatory</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>requisite</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>optional</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='subsysType'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>usb</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>pci</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>scsi</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='capsType'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='pciBackend'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </hostdev>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <rng supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='model'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>virtio</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>virtio-transitional</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>virtio-non-transitional</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='backendModel'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>random</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>egd</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>builtin</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </rng>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <filesystem supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='driverType'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>path</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>handle</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>virtiofs</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </filesystem>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <tpm supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='model'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>tpm-tis</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>tpm-crb</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='backendModel'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>emulator</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>external</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='backendVersion'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>2.0</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </tpm>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <redirdev supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='bus'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>usb</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </redirdev>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <channel supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='type'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>pty</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>unix</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </channel>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <crypto supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='model'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='type'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>qemu</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='backendModel'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>builtin</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </crypto>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <interface supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='backendType'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>default</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>passt</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </interface>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <panic supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='model'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>isa</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>hyperv</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </panic>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <console supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='type'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>null</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>vc</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>pty</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>dev</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>file</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>pipe</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>stdio</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>udp</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>tcp</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>unix</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>qemu-vdagent</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>dbus</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </console>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   </devices>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   <features>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <gic supported='no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <vmcoreinfo supported='yes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <genid supported='yes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <backingStoreInput supported='yes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <backup supported='yes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <async-teardown supported='yes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <ps2 supported='yes'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <sev supported='no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <sgx supported='no'/>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <hyperv supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='features'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>relaxed</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>vapic</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>spinlocks</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>vpindex</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>runtime</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>synic</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>stimer</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>reset</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>vendor_id</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>frequencies</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>reenlightenment</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>tlbflush</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>ipi</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>avic</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>emsr_bitmap</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>xmm_input</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <defaults>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <spinlocks>4095</spinlocks>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <stimer_direct>on</stimer_direct>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </defaults>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </hyperv>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     <launchSecurity supported='yes'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       <enum name='sectype'>
Nov 24 18:43:03 compute-0 nova_compute[269773]:         <value>tdx</value>
Nov 24 18:43:03 compute-0 nova_compute[269773]:       </enum>
Nov 24 18:43:03 compute-0 nova_compute[269773]:     </launchSecurity>
Nov 24 18:43:03 compute-0 nova_compute[269773]:   </features>
Nov 24 18:43:03 compute-0 nova_compute[269773]: </domainCapabilities>
Nov 24 18:43:03 compute-0 nova_compute[269773]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 18:43:03 compute-0 nova_compute[269773]: 2025-11-24 18:43:03.766 269777 DEBUG oslo_concurrency.lockutils [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 18:43:03 compute-0 nova_compute[269773]: 2025-11-24 18:43:03.771 269777 DEBUG oslo_concurrency.lockutils [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 18:43:03 compute-0 nova_compute[269773]: 2025-11-24 18:43:03.771 269777 DEBUG oslo_concurrency.lockutils [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 18:43:04 compute-0 virtqemud[270425]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 24 18:43:04 compute-0 virtqemud[270425]: hostname: compute-0
Nov 24 18:43:04 compute-0 virtqemud[270425]: End of file while reading data: Input/output error
Nov 24 18:43:04 compute-0 systemd[1]: libpod-8bfccfbfd425066c99ff87323aabc0b5530ac34ab41ffeb77e223003743eba60.scope: Deactivated successfully.
Nov 24 18:43:04 compute-0 systemd[1]: libpod-8bfccfbfd425066c99ff87323aabc0b5530ac34ab41ffeb77e223003743eba60.scope: Consumed 3.001s CPU time.
Nov 24 18:43:04 compute-0 podman[270634]: 2025-11-24 18:43:04.169955638 +0000 UTC m=+0.457391976 container died 8bfccfbfd425066c99ff87323aabc0b5530ac34ab41ffeb77e223003743eba60 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 18:43:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8bfccfbfd425066c99ff87323aabc0b5530ac34ab41ffeb77e223003743eba60-userdata-shm.mount: Deactivated successfully.
Nov 24 18:43:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfa0216af483d77ec471622a589be635ab969f169ad184c06e2ca10bd6aa7a81-merged.mount: Deactivated successfully.
Nov 24 18:43:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:43:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:43:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:43:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:43:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:43:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:43:04 compute-0 ceph-mon[74927]: pgmap v815: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:05 compute-0 podman[270634]: 2025-11-24 18:43:05.134524157 +0000 UTC m=+1.421960495 container cleanup 8bfccfbfd425066c99ff87323aabc0b5530ac34ab41ffeb77e223003743eba60 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 18:43:05 compute-0 podman[270634]: nova_compute
Nov 24 18:43:05 compute-0 podman[270665]: nova_compute
Nov 24 18:43:05 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 24 18:43:05 compute-0 systemd[1]: Stopped nova_compute container.
Nov 24 18:43:05 compute-0 systemd[1]: Starting nova_compute container...
Nov 24 18:43:05 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:43:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa0216af483d77ec471622a589be635ab969f169ad184c06e2ca10bd6aa7a81/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 24 18:43:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa0216af483d77ec471622a589be635ab969f169ad184c06e2ca10bd6aa7a81/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 24 18:43:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa0216af483d77ec471622a589be635ab969f169ad184c06e2ca10bd6aa7a81/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 24 18:43:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa0216af483d77ec471622a589be635ab969f169ad184c06e2ca10bd6aa7a81/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 24 18:43:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa0216af483d77ec471622a589be635ab969f169ad184c06e2ca10bd6aa7a81/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 24 18:43:05 compute-0 podman[270678]: 2025-11-24 18:43:05.322428049 +0000 UTC m=+0.092991363 container init 8bfccfbfd425066c99ff87323aabc0b5530ac34ab41ffeb77e223003743eba60 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 18:43:05 compute-0 podman[270678]: 2025-11-24 18:43:05.334659263 +0000 UTC m=+0.105222557 container start 8bfccfbfd425066c99ff87323aabc0b5530ac34ab41ffeb77e223003743eba60 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, container_name=nova_compute, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 18:43:05 compute-0 podman[270678]: nova_compute
Nov 24 18:43:05 compute-0 nova_compute[270693]: + sudo -E kolla_set_configs
Nov 24 18:43:05 compute-0 systemd[1]: Started nova_compute container.
Nov 24 18:43:05 compute-0 sudo[270624]: pam_unix(sudo:session): session closed for user root
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Validating config file
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Copying service configuration files
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Deleting /etc/ceph
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Creating directory /etc/ceph
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Setting permission for /etc/ceph
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Writing out command to execute
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 24 18:43:05 compute-0 nova_compute[270693]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 24 18:43:05 compute-0 nova_compute[270693]: ++ cat /run_command
Nov 24 18:43:05 compute-0 nova_compute[270693]: + CMD=nova-compute
Nov 24 18:43:05 compute-0 nova_compute[270693]: + ARGS=
Nov 24 18:43:05 compute-0 nova_compute[270693]: + sudo kolla_copy_cacerts
Nov 24 18:43:05 compute-0 nova_compute[270693]: + [[ ! -n '' ]]
Nov 24 18:43:05 compute-0 nova_compute[270693]: + . kolla_extend_start
Nov 24 18:43:05 compute-0 nova_compute[270693]: Running command: 'nova-compute'
Nov 24 18:43:05 compute-0 nova_compute[270693]: + echo 'Running command: '\''nova-compute'\'''
Nov 24 18:43:05 compute-0 nova_compute[270693]: + umask 0022
Nov 24 18:43:05 compute-0 nova_compute[270693]: + exec nova-compute
Nov 24 18:43:05 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v816: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:05 compute-0 sudo[270854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scqsvmqyydfgjqgsztycrckboywudyra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764009785.6060987-1566-6341676002131/AnsiballZ_podman_container.py'
Nov 24 18:43:05 compute-0 sudo[270854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:43:06 compute-0 python3.9[270856]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 24 18:43:06 compute-0 systemd[1]: Started libpod-conmon-5e27af85292a9b40c3e4241abdb9b05f5a155fee134dfa070b318e923dd00f66.scope.
Nov 24 18:43:06 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6825909f551c1b145e164649b5f4dc006e991842c39724bc87172ccccc24bb5f/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 24 18:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6825909f551c1b145e164649b5f4dc006e991842c39724bc87172ccccc24bb5f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 24 18:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6825909f551c1b145e164649b5f4dc006e991842c39724bc87172ccccc24bb5f/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 24 18:43:06 compute-0 podman[270882]: 2025-11-24 18:43:06.460406981 +0000 UTC m=+0.137578072 container init 5e27af85292a9b40c3e4241abdb9b05f5a155fee134dfa070b318e923dd00f66 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute_init)
Nov 24 18:43:06 compute-0 podman[270882]: 2025-11-24 18:43:06.469392725 +0000 UTC m=+0.146563766 container start 5e27af85292a9b40c3e4241abdb9b05f5a155fee134dfa070b318e923dd00f66 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute_init, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Nov 24 18:43:06 compute-0 python3.9[270856]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 24 18:43:06 compute-0 nova_compute_init[270904]: INFO:nova_statedir:Applying nova statedir ownership
Nov 24 18:43:06 compute-0 nova_compute_init[270904]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 24 18:43:06 compute-0 nova_compute_init[270904]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 24 18:43:06 compute-0 nova_compute_init[270904]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 24 18:43:06 compute-0 nova_compute_init[270904]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 24 18:43:06 compute-0 nova_compute_init[270904]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 24 18:43:06 compute-0 nova_compute_init[270904]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 24 18:43:06 compute-0 nova_compute_init[270904]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 24 18:43:06 compute-0 nova_compute_init[270904]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 24 18:43:06 compute-0 nova_compute_init[270904]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 24 18:43:06 compute-0 nova_compute_init[270904]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 24 18:43:06 compute-0 nova_compute_init[270904]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 24 18:43:06 compute-0 nova_compute_init[270904]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 24 18:43:06 compute-0 nova_compute_init[270904]: INFO:nova_statedir:Nova statedir ownership complete
Nov 24 18:43:06 compute-0 systemd[1]: libpod-5e27af85292a9b40c3e4241abdb9b05f5a155fee134dfa070b318e923dd00f66.scope: Deactivated successfully.
Nov 24 18:43:06 compute-0 podman[270905]: 2025-11-24 18:43:06.547168899 +0000 UTC m=+0.041021391 container died 5e27af85292a9b40c3e4241abdb9b05f5a155fee134dfa070b318e923dd00f66 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 18:43:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5e27af85292a9b40c3e4241abdb9b05f5a155fee134dfa070b318e923dd00f66-userdata-shm.mount: Deactivated successfully.
Nov 24 18:43:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-6825909f551c1b145e164649b5f4dc006e991842c39724bc87172ccccc24bb5f-merged.mount: Deactivated successfully.
Nov 24 18:43:06 compute-0 podman[270914]: 2025-11-24 18:43:06.62723258 +0000 UTC m=+0.077199181 container cleanup 5e27af85292a9b40c3e4241abdb9b05f5a155fee134dfa070b318e923dd00f66 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Nov 24 18:43:06 compute-0 systemd[1]: libpod-conmon-5e27af85292a9b40c3e4241abdb9b05f5a155fee134dfa070b318e923dd00f66.scope: Deactivated successfully.
Nov 24 18:43:06 compute-0 sudo[270854]: pam_unix(sudo:session): session closed for user root
Nov 24 18:43:06 compute-0 ceph-mon[74927]: pgmap v816: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:07 compute-0 sshd-session[240405]: Connection closed by 192.168.122.30 port 48888
Nov 24 18:43:07 compute-0 sshd-session[240402]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:43:07 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Nov 24 18:43:07 compute-0 systemd[1]: session-52.scope: Consumed 2min 15.251s CPU time.
Nov 24 18:43:07 compute-0 systemd-logind[822]: Session 52 logged out. Waiting for processes to exit.
Nov 24 18:43:07 compute-0 systemd-logind[822]: Removed session 52.
Nov 24 18:43:07 compute-0 nova_compute[270693]: 2025-11-24 18:43:07.357 270697 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 24 18:43:07 compute-0 nova_compute[270693]: 2025-11-24 18:43:07.358 270697 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 24 18:43:07 compute-0 nova_compute[270693]: 2025-11-24 18:43:07.358 270697 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 24 18:43:07 compute-0 nova_compute[270693]: 2025-11-24 18:43:07.358 270697 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Nov 24 18:43:07 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v817: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:07 compute-0 nova_compute[270693]: 2025-11-24 18:43:07.494 270697 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:43:07 compute-0 nova_compute[270693]: 2025-11-24 18:43:07.518 270697 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:43:07 compute-0 nova_compute[270693]: 2025-11-24 18:43:07.519 270697 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Nov 24 18:43:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.158 270697 INFO nova.virt.driver [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.279 270697 INFO nova.compute.provider_config [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.294 270697 DEBUG oslo_concurrency.lockutils [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.295 270697 DEBUG oslo_concurrency.lockutils [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.295 270697 DEBUG oslo_concurrency.lockutils [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.295 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.295 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.295 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.296 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.296 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.296 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.296 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.296 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.296 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.296 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.297 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.297 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.297 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.297 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.297 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.297 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.298 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.298 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.298 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.298 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.298 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.298 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.298 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.299 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.299 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.299 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.299 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.299 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.299 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.300 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.300 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.300 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.300 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.300 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.300 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.300 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.300 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.301 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.301 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.301 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.301 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.301 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.302 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.302 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.302 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.302 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.302 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.302 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.302 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.303 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.303 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.303 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.303 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.303 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.303 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.303 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.304 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.304 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.304 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.304 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.304 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.304 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.304 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.304 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.305 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.305 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.305 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.305 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.305 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.305 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.306 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.306 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.306 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.306 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.306 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.306 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.306 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.306 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.307 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.307 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.307 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.307 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.307 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.307 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.308 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.308 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.308 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.308 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.308 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.308 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.308 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.309 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.309 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.309 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.309 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.309 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.309 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.309 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.310 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.310 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.310 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.310 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.310 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.310 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.310 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.310 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.311 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.311 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.311 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.311 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.311 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.311 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.311 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.312 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.312 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.312 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.312 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.312 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.312 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.312 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.313 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.313 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.313 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.313 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.313 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.313 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.313 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.314 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.314 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.314 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.314 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.314 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.314 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.314 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.314 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.315 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.315 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.315 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.315 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.315 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.315 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.315 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.316 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.316 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.316 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.316 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.316 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.316 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.316 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.317 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.317 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.317 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.317 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.317 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.317 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.318 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.318 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.318 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.318 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.318 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.319 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.319 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.319 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.319 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.319 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.319 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.319 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.320 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.320 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.320 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.320 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.320 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.320 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.321 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.321 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.321 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.321 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.321 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.321 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.322 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.322 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.322 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.322 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.322 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.322 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.322 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.322 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.323 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.323 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.323 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.323 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.323 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.323 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.324 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.324 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.324 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.324 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.324 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.325 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.325 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.325 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.325 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.325 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.325 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.325 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.326 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.326 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.326 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.326 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.326 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.326 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.326 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.327 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.327 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.327 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.327 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.327 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.327 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.327 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.328 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.328 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.328 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.328 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.328 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.328 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.328 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.329 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.329 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.329 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.329 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.329 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.329 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.330 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.330 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.330 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.330 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.330 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.330 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.330 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.331 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.331 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.331 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.331 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.331 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.331 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.331 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.332 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.332 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.332 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.332 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.332 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.332 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.332 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.333 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.333 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.333 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.333 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.333 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.333 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.333 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.334 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.334 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.334 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.334 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.334 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.334 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.335 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.335 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.335 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.335 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.335 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.335 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.335 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.336 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.336 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.336 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.336 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.336 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.337 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.337 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.337 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.337 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.337 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.338 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.338 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.338 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.338 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.338 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.338 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.338 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.339 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.339 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.339 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.339 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.339 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.339 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.339 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.340 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.340 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.340 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.340 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.340 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.340 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.340 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.341 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.341 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.341 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.341 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.341 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.341 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.341 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.342 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.342 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.342 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.342 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.342 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.342 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.342 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.343 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.343 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.343 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.343 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.343 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.343 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.343 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.344 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.344 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.344 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.344 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.344 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.344 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.344 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.345 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.345 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.345 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.345 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.345 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.345 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.346 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.346 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.346 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.346 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.346 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.346 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.346 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.347 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.347 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.347 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.347 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.347 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.347 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.347 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.348 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.348 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.348 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.348 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.348 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.348 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.349 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.349 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.349 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.349 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.349 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.349 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.350 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.350 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.350 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.350 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.350 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.350 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.350 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.350 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.351 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.351 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.351 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.351 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.351 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.351 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.351 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.352 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.352 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.352 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.352 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.352 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.352 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.352 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.353 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.353 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.353 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.353 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.353 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.353 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.353 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.354 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.354 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.354 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.354 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.354 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.354 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.354 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.355 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.355 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.355 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.355 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.355 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.355 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.355 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.356 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.356 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.356 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.356 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.356 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.356 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.356 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.356 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.357 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.357 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.357 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.357 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.357 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.357 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.357 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.358 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.358 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.358 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.358 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.358 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.358 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.358 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.359 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.359 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.359 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.359 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.359 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.359 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.359 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.360 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.360 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.360 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.360 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.360 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.360 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.360 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.360 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.361 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.361 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.361 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.361 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.361 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.361 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.362 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.362 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.362 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.362 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.362 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.362 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.362 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.363 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.363 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.363 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.363 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.363 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.363 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.364 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.364 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.364 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.364 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.364 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.364 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.364 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.365 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.365 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.365 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.365 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.365 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.365 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.365 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.366 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.366 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.366 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.366 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.366 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.366 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.366 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.367 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.367 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.367 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.367 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.367 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.367 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.367 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.368 270697 WARNING oslo_config.cfg [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 24 18:43:08 compute-0 nova_compute[270693]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 24 18:43:08 compute-0 nova_compute[270693]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 24 18:43:08 compute-0 nova_compute[270693]: and ``live_migration_inbound_addr`` respectively.
Nov 24 18:43:08 compute-0 nova_compute[270693]: ).  Its value may be silently ignored in the future.
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.368 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.368 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.368 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.368 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.368 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.369 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.369 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.369 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.369 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.369 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.369 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.369 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.370 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.370 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.370 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.370 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.370 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.370 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.371 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.rbd_secret_uuid        = e5ee928f-099b-569b-93c9-ecf025cbb50d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.371 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.371 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.371 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.371 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.371 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.371 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.372 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.372 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.372 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.372 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.372 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.372 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.373 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.373 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.373 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.373 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.373 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.373 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.373 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.374 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.374 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.374 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.374 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.374 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.374 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.374 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.375 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.375 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.375 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.375 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.375 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.375 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.375 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.376 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.376 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.376 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.376 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.376 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.376 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.376 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.376 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.377 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.377 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.377 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.377 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.377 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.377 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.377 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.378 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.378 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.378 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.378 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.378 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.378 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.378 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.379 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.379 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.379 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.379 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.379 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.379 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.379 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.380 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.380 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.380 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.380 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.380 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.380 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.380 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.381 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.381 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.381 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.381 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.381 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.381 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.381 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.382 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.382 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.382 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.382 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.382 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.382 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.382 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.382 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.383 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.383 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.383 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.383 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.383 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.383 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.384 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.384 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.384 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.384 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.384 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.384 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.384 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.385 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.385 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.385 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.385 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.385 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.385 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.385 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.386 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.386 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.386 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.386 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.386 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.386 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.387 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.387 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.387 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.387 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.387 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.387 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.387 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.388 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.388 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.388 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.388 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.388 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.388 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.389 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.389 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.389 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.389 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.389 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.389 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.389 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.390 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.390 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.390 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.390 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.390 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.390 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.390 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.391 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.391 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.391 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.391 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.391 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.391 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.392 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.392 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.392 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.392 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.392 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.392 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.392 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.393 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.393 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.393 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.393 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.393 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.393 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.393 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.394 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.394 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.394 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.394 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.394 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.394 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.394 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.395 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.395 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.395 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.395 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.395 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.395 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.396 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.396 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.396 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.396 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.396 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.396 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.397 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.397 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.397 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.397 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.397 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.397 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.398 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.398 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.398 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.398 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.398 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.398 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.398 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.399 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.399 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.399 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.399 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.399 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.399 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.399 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.400 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.400 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.400 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.400 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.400 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.400 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.400 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.401 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.401 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.401 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.401 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.401 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.401 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.401 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.402 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.402 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.402 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.402 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.402 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.402 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.403 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.403 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.403 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.403 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.403 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.403 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.403 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.403 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.404 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.404 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.404 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.404 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.404 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.404 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.405 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.405 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.405 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.405 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.405 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.405 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.406 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.406 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.406 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.406 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.406 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.406 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.406 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.406 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.407 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.407 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.407 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.407 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.407 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.407 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.407 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.408 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.408 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.408 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.408 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.408 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.408 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.408 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.408 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.409 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.409 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.409 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.409 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.409 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.409 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.409 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.410 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.410 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.410 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.410 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.410 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.410 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.410 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.411 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.411 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.411 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.411 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.411 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.411 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.411 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.412 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.412 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.412 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.412 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.412 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.412 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.412 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.413 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.413 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.413 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.413 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.413 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.413 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.414 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.414 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.414 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.414 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.414 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.414 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.414 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.415 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.415 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.415 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.415 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.415 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.415 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.415 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.416 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.416 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.416 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.416 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.416 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.416 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.416 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.417 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.417 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.417 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.417 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.417 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.417 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.418 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.418 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.418 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.418 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.418 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.418 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.418 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.419 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.419 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.419 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.419 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.419 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.419 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.419 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.420 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.420 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.420 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.420 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.420 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.420 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.420 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.421 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.421 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.421 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.421 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.421 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.421 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.421 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.422 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.422 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.422 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.422 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.422 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.422 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.422 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.422 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.423 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.423 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.423 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.423 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.423 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.423 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.423 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.424 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.424 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.424 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.424 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.424 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.424 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.424 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.425 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.425 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.425 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.425 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.425 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.425 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.425 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.426 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.426 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.426 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.426 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.426 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.426 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.427 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.427 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.427 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.427 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.427 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.427 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.427 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.427 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.428 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.428 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.428 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.428 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.428 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.428 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.428 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.429 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.429 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.429 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.429 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.429 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.429 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.430 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.431 270697 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.447 270697 DEBUG nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.448 270697 DEBUG nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.448 270697 DEBUG nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.449 270697 DEBUG nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.462 270697 DEBUG nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fa09c0bfa30> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.464 270697 DEBUG nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fa09c0bfa30> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.464 270697 INFO nova.virt.libvirt.driver [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Connection event '1' reason 'None'
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.472 270697 INFO nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Libvirt host capabilities <capabilities>
Nov 24 18:43:08 compute-0 nova_compute[270693]: 
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <host>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <uuid>ce8f254e-4b98-4140-abc7-8040b35476ad</uuid>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <cpu>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <arch>x86_64</arch>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model>EPYC-Rome-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <vendor>AMD</vendor>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <microcode version='16777317'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <signature family='23' model='49' stepping='0'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature name='x2apic'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature name='tsc-deadline'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature name='osxsave'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature name='hypervisor'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature name='tsc_adjust'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature name='spec-ctrl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature name='stibp'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature name='arch-capabilities'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature name='ssbd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature name='cmp_legacy'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature name='topoext'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature name='virt-ssbd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature name='lbrv'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature name='tsc-scale'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature name='vmcb-clean'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature name='pause-filter'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature name='pfthreshold'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature name='svme-addr-chk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature name='rdctl-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature name='skip-l1dfl-vmentry'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature name='mds-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature name='pschange-mc-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <pages unit='KiB' size='4'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <pages unit='KiB' size='2048'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <pages unit='KiB' size='1048576'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </cpu>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <power_management>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <suspend_mem/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </power_management>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <iommu support='no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <migration_features>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <live/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <uri_transports>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <uri_transport>tcp</uri_transport>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <uri_transport>rdma</uri_transport>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </uri_transports>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </migration_features>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <topology>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <cells num='1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <cell id='0'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:           <memory unit='KiB'>7864320</memory>
Nov 24 18:43:08 compute-0 nova_compute[270693]:           <pages unit='KiB' size='4'>1966080</pages>
Nov 24 18:43:08 compute-0 nova_compute[270693]:           <pages unit='KiB' size='2048'>0</pages>
Nov 24 18:43:08 compute-0 nova_compute[270693]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 24 18:43:08 compute-0 nova_compute[270693]:           <distances>
Nov 24 18:43:08 compute-0 nova_compute[270693]:             <sibling id='0' value='10'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:           </distances>
Nov 24 18:43:08 compute-0 nova_compute[270693]:           <cpus num='8'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:           </cpus>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         </cell>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </cells>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </topology>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <cache>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </cache>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <secmodel>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model>selinux</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <doi>0</doi>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </secmodel>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <secmodel>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model>dac</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <doi>0</doi>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </secmodel>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   </host>
Nov 24 18:43:08 compute-0 nova_compute[270693]: 
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <guest>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <os_type>hvm</os_type>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <arch name='i686'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <wordsize>32</wordsize>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <domain type='qemu'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <domain type='kvm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </arch>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <features>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <pae/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <nonpae/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <acpi default='on' toggle='yes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <apic default='on' toggle='no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <cpuselection/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <deviceboot/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <disksnapshot default='on' toggle='no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <externalSnapshot/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </features>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   </guest>
Nov 24 18:43:08 compute-0 nova_compute[270693]: 
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <guest>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <os_type>hvm</os_type>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <arch name='x86_64'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <wordsize>64</wordsize>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <domain type='qemu'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <domain type='kvm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </arch>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <features>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <acpi default='on' toggle='yes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <apic default='on' toggle='no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <cpuselection/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <deviceboot/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <disksnapshot default='on' toggle='no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <externalSnapshot/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </features>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   </guest>
Nov 24 18:43:08 compute-0 nova_compute[270693]: 
Nov 24 18:43:08 compute-0 nova_compute[270693]: </capabilities>
Nov 24 18:43:08 compute-0 nova_compute[270693]: 
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.479 270697 WARNING nova.virt.libvirt.driver [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.480 270697 DEBUG nova.virt.libvirt.volume.mount [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.481 270697 DEBUG nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.487 270697 DEBUG nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 24 18:43:08 compute-0 nova_compute[270693]: <domainCapabilities>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <domain>kvm</domain>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <arch>i686</arch>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <vcpu max='4096'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <iothreads supported='yes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <os supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <enum name='firmware'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <loader supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='type'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>rom</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>pflash</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='readonly'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>yes</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>no</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='secure'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>no</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </loader>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   </os>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <cpu>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <mode name='host-passthrough' supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='hostPassthroughMigratable'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>on</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>off</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </mode>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <mode name='maximum' supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='maximumMigratable'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>on</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>off</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </mode>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <mode name='host-model' supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <vendor>AMD</vendor>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='x2apic'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='hypervisor'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='stibp'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='ssbd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='overflow-recov'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='succor'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='ibrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='lbrv'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='tsc-scale'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='flushbyasid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='pause-filter'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='pfthreshold'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='disable' name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </mode>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <mode name='custom' supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell-noTSX'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cascadelake-Server'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cooperlake'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cooperlake-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cooperlake-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Denverton'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mpx'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Denverton-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mpx'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Denverton-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Denverton-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Dhyana-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Genoa'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amd-psfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='auto-ibrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='no-nested-data-bp'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='null-sel-clr-base'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='stibp-always-on'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amd-psfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='auto-ibrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='no-nested-data-bp'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='null-sel-clr-base'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='stibp-always-on'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Milan'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Milan-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Milan-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amd-psfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='no-nested-data-bp'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='null-sel-clr-base'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='stibp-always-on'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Rome'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Rome-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Rome-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Rome-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='GraniteRapids'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-tile'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fbsdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrc'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fzrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mcdt-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pbrsb-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='prefetchiti'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='psdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='GraniteRapids-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-tile'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fbsdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrc'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fzrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mcdt-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pbrsb-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='prefetchiti'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='psdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='GraniteRapids-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-tile'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx10'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx10-128'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx10-256'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx10-512'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cldemote'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fbsdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrc'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fzrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mcdt-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdir64b'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdiri'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pbrsb-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='prefetchiti'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='psdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell-noTSX'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-v5'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-v6'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-v7'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='IvyBridge'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='IvyBridge-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='IvyBridge-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='IvyBridge-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='KnightsMill'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-4fmaps'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-4vnniw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512er'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512pf'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='KnightsMill-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-4fmaps'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-4vnniw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512er'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512pf'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Opteron_G4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fma4'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xop'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Opteron_G4-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fma4'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xop'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Opteron_G5'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fma4'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tbm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xop'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Opteron_G5-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fma4'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tbm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xop'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='SapphireRapids'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-tile'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrc'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fzrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='SapphireRapids-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-tile'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrc'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fzrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='SapphireRapids-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-tile'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fbsdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrc'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fzrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='psdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='SapphireRapids-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-tile'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cldemote'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fbsdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrc'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fzrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdir64b'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdiri'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='psdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='SierraForest'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-ne-convert'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cmpccxadd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fbsdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mcdt-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pbrsb-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='psdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='SierraForest-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-ne-convert'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cmpccxadd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fbsdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mcdt-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pbrsb-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='psdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Client'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Client-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Client-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Client-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Client-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server-v5'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Snowridge'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cldemote'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='core-capability'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdir64b'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdiri'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mpx'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='split-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Snowridge-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cldemote'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='core-capability'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdir64b'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdiri'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mpx'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='split-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Snowridge-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cldemote'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='core-capability'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdir64b'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdiri'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='split-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Snowridge-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cldemote'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='core-capability'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdir64b'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdiri'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='split-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Snowridge-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cldemote'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdir64b'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdiri'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='athlon'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnow'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnowext'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='athlon-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnow'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnowext'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='core2duo'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='core2duo-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='coreduo'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='coreduo-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='n270'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='n270-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='phenom'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnow'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnowext'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='phenom-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnow'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnowext'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </mode>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   </cpu>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <memoryBacking supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <enum name='sourceType'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <value>file</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <value>anonymous</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <value>memfd</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   </memoryBacking>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <devices>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <disk supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='diskDevice'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>disk</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>cdrom</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>floppy</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>lun</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='bus'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>fdc</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>scsi</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>usb</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>sata</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='model'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio-transitional</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio-non-transitional</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </disk>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <graphics supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='type'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>vnc</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>egl-headless</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>dbus</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </graphics>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <video supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='modelType'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>vga</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>cirrus</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>none</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>bochs</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>ramfb</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </video>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <hostdev supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='mode'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>subsystem</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='startupPolicy'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>default</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>mandatory</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>requisite</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>optional</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='subsysType'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>usb</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>pci</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>scsi</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='capsType'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='pciBackend'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </hostdev>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <rng supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='model'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio-transitional</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio-non-transitional</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='backendModel'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>random</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>egd</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>builtin</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </rng>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <filesystem supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='driverType'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>path</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>handle</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtiofs</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </filesystem>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <tpm supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='model'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>tpm-tis</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>tpm-crb</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='backendModel'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>emulator</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>external</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='backendVersion'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>2.0</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </tpm>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <redirdev supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='bus'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>usb</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </redirdev>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <channel supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='type'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>pty</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>unix</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </channel>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <crypto supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='model'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='type'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>qemu</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='backendModel'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>builtin</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </crypto>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <interface supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='backendType'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>default</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>passt</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </interface>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <panic supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='model'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>isa</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>hyperv</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </panic>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <console supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='type'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>null</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>vc</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>pty</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>dev</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>file</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>pipe</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>stdio</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>udp</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>tcp</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>unix</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>qemu-vdagent</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>dbus</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </console>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   </devices>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <features>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <gic supported='no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <vmcoreinfo supported='yes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <genid supported='yes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <backingStoreInput supported='yes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <backup supported='yes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <async-teardown supported='yes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <ps2 supported='yes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <sev supported='no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <sgx supported='no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <hyperv supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='features'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>relaxed</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>vapic</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>spinlocks</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>vpindex</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>runtime</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>synic</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>stimer</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>reset</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>vendor_id</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>frequencies</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>reenlightenment</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>tlbflush</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>ipi</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>avic</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>emsr_bitmap</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>xmm_input</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <defaults>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <spinlocks>4095</spinlocks>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <stimer_direct>on</stimer_direct>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </defaults>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </hyperv>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <launchSecurity supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='sectype'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>tdx</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </launchSecurity>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   </features>
Nov 24 18:43:08 compute-0 nova_compute[270693]: </domainCapabilities>
Nov 24 18:43:08 compute-0 nova_compute[270693]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.492 270697 DEBUG nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 24 18:43:08 compute-0 nova_compute[270693]: <domainCapabilities>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <domain>kvm</domain>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <arch>i686</arch>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <vcpu max='240'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <iothreads supported='yes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <os supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <enum name='firmware'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <loader supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='type'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>rom</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>pflash</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='readonly'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>yes</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>no</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='secure'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>no</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </loader>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   </os>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <cpu>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <mode name='host-passthrough' supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='hostPassthroughMigratable'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>on</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>off</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </mode>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <mode name='maximum' supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='maximumMigratable'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>on</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>off</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </mode>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <mode name='host-model' supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <vendor>AMD</vendor>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='x2apic'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='hypervisor'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='stibp'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='ssbd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='overflow-recov'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='succor'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='ibrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='lbrv'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='tsc-scale'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='flushbyasid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='pause-filter'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='pfthreshold'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='disable' name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </mode>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <mode name='custom' supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell-noTSX'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cascadelake-Server'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cooperlake'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cooperlake-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cooperlake-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Denverton'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mpx'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Denverton-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mpx'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Denverton-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Denverton-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Dhyana-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Genoa'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amd-psfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='auto-ibrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='no-nested-data-bp'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='null-sel-clr-base'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='stibp-always-on'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amd-psfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='auto-ibrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='no-nested-data-bp'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='null-sel-clr-base'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='stibp-always-on'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Milan'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Milan-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Milan-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amd-psfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='no-nested-data-bp'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='null-sel-clr-base'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='stibp-always-on'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Rome'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Rome-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Rome-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Rome-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='GraniteRapids'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-tile'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fbsdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrc'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fzrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mcdt-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pbrsb-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='prefetchiti'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='psdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='GraniteRapids-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-tile'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fbsdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrc'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fzrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mcdt-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pbrsb-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='prefetchiti'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='psdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='GraniteRapids-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-tile'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx10'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx10-128'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx10-256'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx10-512'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cldemote'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fbsdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrc'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fzrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mcdt-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdir64b'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdiri'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pbrsb-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='prefetchiti'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='psdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell-noTSX'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-v5'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-v6'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-v7'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='IvyBridge'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='IvyBridge-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='IvyBridge-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='IvyBridge-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='KnightsMill'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-4fmaps'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-4vnniw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512er'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512pf'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='KnightsMill-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-4fmaps'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-4vnniw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512er'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512pf'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Opteron_G4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fma4'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xop'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Opteron_G4-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fma4'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xop'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Opteron_G5'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fma4'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tbm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xop'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Opteron_G5-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fma4'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tbm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xop'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='SapphireRapids'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-tile'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrc'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fzrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='SapphireRapids-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-tile'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrc'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fzrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='SapphireRapids-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-tile'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fbsdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrc'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fzrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='psdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='SapphireRapids-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-tile'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cldemote'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fbsdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrc'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fzrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdir64b'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdiri'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='psdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='SierraForest'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-ne-convert'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cmpccxadd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fbsdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mcdt-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pbrsb-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='psdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='SierraForest-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-ne-convert'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cmpccxadd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fbsdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mcdt-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pbrsb-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='psdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Client'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Client-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Client-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Client-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Client-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server-v5'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Snowridge'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cldemote'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='core-capability'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdir64b'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdiri'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mpx'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='split-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Snowridge-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cldemote'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='core-capability'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdir64b'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdiri'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mpx'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='split-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Snowridge-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cldemote'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='core-capability'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdir64b'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdiri'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='split-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Snowridge-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cldemote'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='core-capability'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdir64b'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdiri'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='split-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Snowridge-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cldemote'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdir64b'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdiri'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='athlon'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnow'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnowext'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='athlon-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnow'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnowext'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='core2duo'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='core2duo-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='coreduo'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='coreduo-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='n270'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='n270-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='phenom'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnow'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnowext'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='phenom-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnow'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnowext'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </mode>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   </cpu>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <memoryBacking supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <enum name='sourceType'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <value>file</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <value>anonymous</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <value>memfd</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   </memoryBacking>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <devices>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <disk supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='diskDevice'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>disk</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>cdrom</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>floppy</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>lun</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='bus'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>ide</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>fdc</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>scsi</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>usb</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>sata</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='model'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio-transitional</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio-non-transitional</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </disk>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <graphics supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='type'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>vnc</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>egl-headless</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>dbus</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </graphics>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <video supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='modelType'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>vga</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>cirrus</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>none</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>bochs</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>ramfb</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </video>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <hostdev supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='mode'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>subsystem</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='startupPolicy'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>default</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>mandatory</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>requisite</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>optional</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='subsysType'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>usb</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>pci</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>scsi</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='capsType'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='pciBackend'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </hostdev>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <rng supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='model'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio-transitional</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio-non-transitional</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='backendModel'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>random</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>egd</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>builtin</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </rng>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <filesystem supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='driverType'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>path</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>handle</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtiofs</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </filesystem>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <tpm supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='model'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>tpm-tis</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>tpm-crb</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='backendModel'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>emulator</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>external</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='backendVersion'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>2.0</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </tpm>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <redirdev supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='bus'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>usb</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </redirdev>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <channel supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='type'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>pty</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>unix</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </channel>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <crypto supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='model'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='type'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>qemu</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='backendModel'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>builtin</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </crypto>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <interface supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='backendType'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>default</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>passt</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </interface>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <panic supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='model'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>isa</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>hyperv</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </panic>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <console supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='type'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>null</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>vc</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>pty</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>dev</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>file</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>pipe</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>stdio</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>udp</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>tcp</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>unix</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>qemu-vdagent</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>dbus</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </console>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   </devices>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <features>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <gic supported='no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <vmcoreinfo supported='yes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <genid supported='yes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <backingStoreInput supported='yes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <backup supported='yes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <async-teardown supported='yes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <ps2 supported='yes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <sev supported='no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <sgx supported='no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <hyperv supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='features'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>relaxed</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>vapic</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>spinlocks</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>vpindex</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>runtime</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>synic</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>stimer</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>reset</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>vendor_id</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>frequencies</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>reenlightenment</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>tlbflush</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>ipi</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>avic</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>emsr_bitmap</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>xmm_input</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <defaults>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <spinlocks>4095</spinlocks>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <stimer_direct>on</stimer_direct>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </defaults>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </hyperv>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <launchSecurity supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='sectype'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>tdx</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </launchSecurity>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   </features>
Nov 24 18:43:08 compute-0 nova_compute[270693]: </domainCapabilities>
Nov 24 18:43:08 compute-0 nova_compute[270693]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.516 270697 DEBUG nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.521 270697 DEBUG nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 24 18:43:08 compute-0 nova_compute[270693]: <domainCapabilities>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <domain>kvm</domain>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <arch>x86_64</arch>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <vcpu max='4096'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <iothreads supported='yes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <os supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <enum name='firmware'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <value>efi</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <loader supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='type'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>rom</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>pflash</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='readonly'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>yes</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>no</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='secure'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>yes</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>no</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </loader>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   </os>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <cpu>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <mode name='host-passthrough' supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='hostPassthroughMigratable'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>on</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>off</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </mode>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <mode name='maximum' supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='maximumMigratable'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>on</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>off</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </mode>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <mode name='host-model' supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <vendor>AMD</vendor>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='x2apic'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='hypervisor'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='stibp'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='ssbd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='overflow-recov'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='succor'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='ibrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='lbrv'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='tsc-scale'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='flushbyasid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='pause-filter'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='pfthreshold'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='disable' name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </mode>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <mode name='custom' supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell-noTSX'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cascadelake-Server'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cooperlake'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cooperlake-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cooperlake-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Denverton'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mpx'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Denverton-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mpx'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Denverton-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Denverton-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Dhyana-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Genoa'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amd-psfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='auto-ibrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='no-nested-data-bp'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='null-sel-clr-base'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='stibp-always-on'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amd-psfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='auto-ibrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='no-nested-data-bp'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='null-sel-clr-base'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='stibp-always-on'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Milan'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Milan-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Milan-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amd-psfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='no-nested-data-bp'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='null-sel-clr-base'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='stibp-always-on'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Rome'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Rome-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Rome-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Rome-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='GraniteRapids'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-tile'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fbsdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrc'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fzrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mcdt-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pbrsb-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='prefetchiti'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='psdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='GraniteRapids-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-tile'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fbsdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrc'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fzrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mcdt-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pbrsb-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='prefetchiti'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='psdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='GraniteRapids-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-tile'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx10'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx10-128'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx10-256'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx10-512'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cldemote'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fbsdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrc'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fzrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mcdt-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdir64b'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdiri'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pbrsb-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='prefetchiti'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='psdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell-noTSX'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-v5'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-v6'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-v7'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='IvyBridge'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='IvyBridge-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='IvyBridge-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='IvyBridge-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='KnightsMill'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-4fmaps'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-4vnniw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512er'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512pf'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='KnightsMill-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-4fmaps'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-4vnniw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512er'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512pf'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Opteron_G4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fma4'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xop'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Opteron_G4-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fma4'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xop'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Opteron_G5'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fma4'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tbm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xop'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Opteron_G5-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fma4'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tbm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xop'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='SapphireRapids'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-tile'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrc'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fzrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='SapphireRapids-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-tile'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrc'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fzrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='SapphireRapids-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-tile'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fbsdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrc'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fzrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='psdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='SapphireRapids-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-tile'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cldemote'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fbsdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrc'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fzrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdir64b'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdiri'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='psdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='SierraForest'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-ne-convert'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cmpccxadd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fbsdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mcdt-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pbrsb-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='psdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='SierraForest-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-ne-convert'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cmpccxadd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fbsdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mcdt-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pbrsb-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='psdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Client'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Client-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Client-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Client-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Client-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server-v5'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Snowridge'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cldemote'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='core-capability'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdir64b'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdiri'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mpx'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='split-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Snowridge-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cldemote'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='core-capability'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdir64b'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdiri'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mpx'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='split-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Snowridge-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cldemote'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='core-capability'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdir64b'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdiri'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='split-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Snowridge-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cldemote'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='core-capability'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdir64b'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdiri'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='split-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Snowridge-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cldemote'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdir64b'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdiri'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='athlon'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnow'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnowext'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='athlon-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnow'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnowext'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='core2duo'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='core2duo-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='coreduo'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='coreduo-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='n270'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='n270-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='phenom'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnow'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnowext'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='phenom-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnow'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnowext'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </mode>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   </cpu>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <memoryBacking supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <enum name='sourceType'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <value>file</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <value>anonymous</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <value>memfd</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   </memoryBacking>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <devices>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <disk supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='diskDevice'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>disk</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>cdrom</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>floppy</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>lun</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='bus'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>fdc</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>scsi</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>usb</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>sata</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='model'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio-transitional</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio-non-transitional</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </disk>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <graphics supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='type'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>vnc</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>egl-headless</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>dbus</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </graphics>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <video supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='modelType'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>vga</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>cirrus</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>none</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>bochs</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>ramfb</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </video>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <hostdev supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='mode'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>subsystem</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='startupPolicy'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>default</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>mandatory</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>requisite</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>optional</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='subsysType'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>usb</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>pci</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>scsi</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='capsType'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='pciBackend'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </hostdev>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <rng supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='model'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio-transitional</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio-non-transitional</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='backendModel'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>random</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>egd</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>builtin</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </rng>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <filesystem supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='driverType'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>path</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>handle</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtiofs</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </filesystem>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <tpm supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='model'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>tpm-tis</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>tpm-crb</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='backendModel'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>emulator</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>external</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='backendVersion'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>2.0</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </tpm>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <redirdev supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='bus'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>usb</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </redirdev>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <channel supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='type'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>pty</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>unix</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </channel>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <crypto supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='model'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='type'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>qemu</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='backendModel'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>builtin</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </crypto>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <interface supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='backendType'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>default</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>passt</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </interface>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <panic supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='model'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>isa</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>hyperv</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </panic>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <console supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='type'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>null</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>vc</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>pty</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>dev</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>file</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>pipe</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>stdio</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>udp</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>tcp</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>unix</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>qemu-vdagent</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>dbus</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </console>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   </devices>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <features>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <gic supported='no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <vmcoreinfo supported='yes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <genid supported='yes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <backingStoreInput supported='yes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <backup supported='yes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <async-teardown supported='yes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <ps2 supported='yes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <sev supported='no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <sgx supported='no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <hyperv supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='features'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>relaxed</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>vapic</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>spinlocks</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>vpindex</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>runtime</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>synic</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>stimer</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>reset</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>vendor_id</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>frequencies</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>reenlightenment</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>tlbflush</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>ipi</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>avic</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>emsr_bitmap</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>xmm_input</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <defaults>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <spinlocks>4095</spinlocks>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <stimer_direct>on</stimer_direct>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </defaults>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </hyperv>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <launchSecurity supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='sectype'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>tdx</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </launchSecurity>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   </features>
Nov 24 18:43:08 compute-0 nova_compute[270693]: </domainCapabilities>
Nov 24 18:43:08 compute-0 nova_compute[270693]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.575 270697 DEBUG nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 24 18:43:08 compute-0 nova_compute[270693]: <domainCapabilities>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <domain>kvm</domain>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <arch>x86_64</arch>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <vcpu max='240'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <iothreads supported='yes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <os supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <enum name='firmware'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <loader supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='type'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>rom</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>pflash</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='readonly'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>yes</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>no</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='secure'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>no</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </loader>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   </os>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <cpu>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <mode name='host-passthrough' supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='hostPassthroughMigratable'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>on</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>off</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </mode>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <mode name='maximum' supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='maximumMigratable'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>on</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>off</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </mode>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <mode name='host-model' supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <vendor>AMD</vendor>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='x2apic'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='hypervisor'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='stibp'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='ssbd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='overflow-recov'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='succor'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='ibrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='lbrv'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='tsc-scale'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='flushbyasid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='pause-filter'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='pfthreshold'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <feature policy='disable' name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </mode>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <mode name='custom' supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell-noTSX'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Broadwell-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cascadelake-Server'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cooperlake'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cooperlake-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Cooperlake-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Denverton'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mpx'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Denverton-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mpx'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Denverton-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Denverton-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Dhyana-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Genoa'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amd-psfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='auto-ibrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='no-nested-data-bp'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='null-sel-clr-base'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='stibp-always-on'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amd-psfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='auto-ibrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='no-nested-data-bp'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='null-sel-clr-base'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='stibp-always-on'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Milan'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Milan-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Milan-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amd-psfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='no-nested-data-bp'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='null-sel-clr-base'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='stibp-always-on'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Rome'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Rome-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Rome-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-Rome-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='EPYC-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='GraniteRapids'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-tile'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fbsdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrc'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fzrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mcdt-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pbrsb-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='prefetchiti'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='psdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='GraniteRapids-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-tile'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fbsdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrc'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fzrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mcdt-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pbrsb-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='prefetchiti'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='psdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='GraniteRapids-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-tile'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx10'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx10-128'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx10-256'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx10-512'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cldemote'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fbsdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrc'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fzrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mcdt-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdir64b'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdiri'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pbrsb-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='prefetchiti'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='psdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell-noTSX'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Haswell-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-v5'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-v6'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Icelake-Server-v7'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='IvyBridge'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='IvyBridge-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='IvyBridge-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='IvyBridge-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='KnightsMill'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-4fmaps'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-4vnniw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512er'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512pf'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='KnightsMill-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-4fmaps'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-4vnniw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512er'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512pf'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Opteron_G4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fma4'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xop'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Opteron_G4-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fma4'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xop'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Opteron_G5'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fma4'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tbm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xop'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Opteron_G5-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fma4'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tbm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xop'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='SapphireRapids'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-tile'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrc'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fzrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='SapphireRapids-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-tile'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrc'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fzrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='SapphireRapids-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-tile'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fbsdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrc'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fzrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='psdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='SapphireRapids-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='amx-tile'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-bf16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-fp16'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512-vpopcntdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bitalg'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vbmi2'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cldemote'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fbsdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrc'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fzrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='la57'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdir64b'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdiri'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='psdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='taa-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='tsx-ldtrk'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xfd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='SierraForest'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-ne-convert'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cmpccxadd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fbsdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mcdt-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pbrsb-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='psdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='SierraForest-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-ifma'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-ne-convert'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx-vnni-int8'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='bus-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cmpccxadd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fbsdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='fsrs'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ibrs-all'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mcdt-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pbrsb-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='psdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='sbdr-ssdp-no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='serialize'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vaes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='vpclmulqdq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Client'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Client-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Client-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Client-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Client-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='hle'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='rtm'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Skylake-Server-v5'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512bw'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512cd'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512dq'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512f'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='avx512vl'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='invpcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pcid'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='pku'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Snowridge'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cldemote'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='core-capability'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdir64b'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdiri'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mpx'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='split-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Snowridge-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cldemote'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='core-capability'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdir64b'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdiri'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='mpx'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='split-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Snowridge-v2'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cldemote'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='core-capability'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdir64b'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdiri'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='split-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Snowridge-v3'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cldemote'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='core-capability'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdir64b'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdiri'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='split-lock-detect'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='Snowridge-v4'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='cldemote'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='erms'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='gfni'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdir64b'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='movdiri'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='xsaves'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='athlon'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnow'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnowext'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='athlon-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnow'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnowext'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='core2duo'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='core2duo-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='coreduo'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='coreduo-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='n270'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='n270-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='ss'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='phenom'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnow'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnowext'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <blockers model='phenom-v1'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnow'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <feature name='3dnowext'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </blockers>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </mode>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   </cpu>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <memoryBacking supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <enum name='sourceType'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <value>file</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <value>anonymous</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <value>memfd</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   </memoryBacking>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <devices>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <disk supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='diskDevice'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>disk</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>cdrom</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>floppy</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>lun</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='bus'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>ide</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>fdc</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>scsi</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>usb</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>sata</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='model'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio-transitional</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio-non-transitional</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </disk>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <graphics supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='type'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>vnc</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>egl-headless</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>dbus</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </graphics>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <video supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='modelType'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>vga</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>cirrus</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>none</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>bochs</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>ramfb</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </video>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <hostdev supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='mode'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>subsystem</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='startupPolicy'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>default</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>mandatory</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>requisite</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>optional</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='subsysType'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>usb</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>pci</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>scsi</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='capsType'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='pciBackend'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </hostdev>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <rng supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='model'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio-transitional</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtio-non-transitional</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='backendModel'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>random</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>egd</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>builtin</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </rng>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <filesystem supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='driverType'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>path</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>handle</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>virtiofs</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </filesystem>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <tpm supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='model'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>tpm-tis</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>tpm-crb</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='backendModel'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>emulator</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>external</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='backendVersion'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>2.0</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </tpm>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <redirdev supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='bus'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>usb</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </redirdev>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <channel supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='type'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>pty</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>unix</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </channel>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <crypto supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='model'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='type'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>qemu</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='backendModel'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>builtin</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </crypto>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <interface supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='backendType'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>default</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>passt</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </interface>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <panic supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='model'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>isa</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>hyperv</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </panic>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <console supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='type'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>null</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>vc</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>pty</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>dev</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>file</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>pipe</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>stdio</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>udp</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>tcp</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>unix</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>qemu-vdagent</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>dbus</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </console>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   </devices>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   <features>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <gic supported='no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <vmcoreinfo supported='yes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <genid supported='yes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <backingStoreInput supported='yes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <backup supported='yes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <async-teardown supported='yes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <ps2 supported='yes'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <sev supported='no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <sgx supported='no'/>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <hyperv supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='features'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>relaxed</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>vapic</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>spinlocks</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>vpindex</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>runtime</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>synic</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>stimer</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>reset</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>vendor_id</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>frequencies</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>reenlightenment</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>tlbflush</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>ipi</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>avic</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>emsr_bitmap</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>xmm_input</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <defaults>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <spinlocks>4095</spinlocks>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <stimer_direct>on</stimer_direct>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </defaults>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </hyperv>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     <launchSecurity supported='yes'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       <enum name='sectype'>
Nov 24 18:43:08 compute-0 nova_compute[270693]:         <value>tdx</value>
Nov 24 18:43:08 compute-0 nova_compute[270693]:       </enum>
Nov 24 18:43:08 compute-0 nova_compute[270693]:     </launchSecurity>
Nov 24 18:43:08 compute-0 nova_compute[270693]:   </features>
Nov 24 18:43:08 compute-0 nova_compute[270693]: </domainCapabilities>
Nov 24 18:43:08 compute-0 nova_compute[270693]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.631 270697 DEBUG nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.631 270697 INFO nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Secure Boot support detected
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.633 270697 INFO nova.virt.libvirt.driver [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.641 270697 DEBUG nova.virt.libvirt.driver [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.672 270697 INFO nova.virt.node [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Determined node identity d1cce7ec-de83-4810-91f8-1852891da8a6 from /var/lib/nova/compute_id
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.696 270697 WARNING nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Compute nodes ['d1cce7ec-de83-4810-91f8-1852891da8a6'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.738 270697 INFO nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.790 270697 WARNING nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.790 270697 DEBUG oslo_concurrency.lockutils [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.790 270697 DEBUG oslo_concurrency.lockutils [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.790 270697 DEBUG oslo_concurrency.lockutils [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.790 270697 DEBUG nova.compute.resource_tracker [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 18:43:08 compute-0 nova_compute[270693]: 2025-11-24 18:43:08.791 270697 DEBUG oslo_concurrency.processutils [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:43:08 compute-0 ceph-mon[74927]: pgmap v817: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:43:09 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3802188247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:43:09 compute-0 nova_compute[270693]: 2025-11-24 18:43:09.190 270697 DEBUG oslo_concurrency.processutils [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:43:09 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 24 18:43:09 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 24 18:43:09 compute-0 nova_compute[270693]: 2025-11-24 18:43:09.452 270697 WARNING nova.virt.libvirt.driver [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 18:43:09 compute-0 nova_compute[270693]: 2025-11-24 18:43:09.453 270697 DEBUG nova.compute.resource_tracker [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5145MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 18:43:09 compute-0 nova_compute[270693]: 2025-11-24 18:43:09.454 270697 DEBUG oslo_concurrency.lockutils [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:43:09 compute-0 nova_compute[270693]: 2025-11-24 18:43:09.454 270697 DEBUG oslo_concurrency.lockutils [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:43:09 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v818: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:09 compute-0 nova_compute[270693]: 2025-11-24 18:43:09.485 270697 WARNING nova.compute.resource_tracker [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] No compute node record for compute-0.ctlplane.example.com:d1cce7ec-de83-4810-91f8-1852891da8a6: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host d1cce7ec-de83-4810-91f8-1852891da8a6 could not be found.
Nov 24 18:43:09 compute-0 nova_compute[270693]: 2025-11-24 18:43:09.510 270697 INFO nova.compute.resource_tracker [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: d1cce7ec-de83-4810-91f8-1852891da8a6
Nov 24 18:43:09 compute-0 nova_compute[270693]: 2025-11-24 18:43:09.598 270697 DEBUG nova.compute.resource_tracker [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 18:43:09 compute-0 nova_compute[270693]: 2025-11-24 18:43:09.598 270697 DEBUG nova.compute.resource_tracker [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 18:43:09 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3802188247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:43:10 compute-0 nova_compute[270693]: 2025-11-24 18:43:10.716 270697 INFO nova.scheduler.client.report [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [req-d063727b-b741-4d91-984e-65ebae7920b5] Created resource provider record via placement API for resource provider with UUID d1cce7ec-de83-4810-91f8-1852891da8a6 and name compute-0.ctlplane.example.com.
Nov 24 18:43:10 compute-0 ceph-mon[74927]: pgmap v818: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:11 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v819: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:11 compute-0 nova_compute[270693]: 2025-11-24 18:43:11.560 270697 DEBUG oslo_concurrency.processutils [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:43:11 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:43:11 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1069805352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:43:11 compute-0 nova_compute[270693]: 2025-11-24 18:43:11.966 270697 DEBUG oslo_concurrency.processutils [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:43:11 compute-0 nova_compute[270693]: 2025-11-24 18:43:11.972 270697 DEBUG nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 24 18:43:11 compute-0 nova_compute[270693]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Nov 24 18:43:11 compute-0 nova_compute[270693]: 2025-11-24 18:43:11.973 270697 INFO nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] kernel doesn't support AMD SEV
Nov 24 18:43:11 compute-0 nova_compute[270693]: 2025-11-24 18:43:11.974 270697 DEBUG nova.compute.provider_tree [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Updating inventory in ProviderTree for provider d1cce7ec-de83-4810-91f8-1852891da8a6 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 18:43:11 compute-0 nova_compute[270693]: 2025-11-24 18:43:11.975 270697 DEBUG nova.virt.libvirt.driver [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 18:43:12 compute-0 nova_compute[270693]: 2025-11-24 18:43:12.041 270697 DEBUG nova.scheduler.client.report [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Updated inventory for provider d1cce7ec-de83-4810-91f8-1852891da8a6 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 24 18:43:12 compute-0 nova_compute[270693]: 2025-11-24 18:43:12.042 270697 DEBUG nova.compute.provider_tree [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Updating resource provider d1cce7ec-de83-4810-91f8-1852891da8a6 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 24 18:43:12 compute-0 nova_compute[270693]: 2025-11-24 18:43:12.042 270697 DEBUG nova.compute.provider_tree [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Updating inventory in ProviderTree for provider d1cce7ec-de83-4810-91f8-1852891da8a6 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 18:43:12 compute-0 nova_compute[270693]: 2025-11-24 18:43:12.143 270697 DEBUG nova.compute.provider_tree [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Updating resource provider d1cce7ec-de83-4810-91f8-1852891da8a6 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 24 18:43:12 compute-0 nova_compute[270693]: 2025-11-24 18:43:12.169 270697 DEBUG nova.compute.resource_tracker [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 18:43:12 compute-0 nova_compute[270693]: 2025-11-24 18:43:12.169 270697 DEBUG oslo_concurrency.lockutils [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:43:12 compute-0 nova_compute[270693]: 2025-11-24 18:43:12.169 270697 DEBUG nova.service [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Nov 24 18:43:12 compute-0 nova_compute[270693]: 2025-11-24 18:43:12.281 270697 DEBUG nova.service [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Nov 24 18:43:12 compute-0 nova_compute[270693]: 2025-11-24 18:43:12.282 270697 DEBUG nova.servicegroup.drivers.db [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Nov 24 18:43:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:43:12 compute-0 ceph-mon[74927]: pgmap v819: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:12 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1069805352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:43:13 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v820: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:14 compute-0 ceph-mon[74927]: pgmap v820: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:15 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v821: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:16 compute-0 ceph-mon[74927]: pgmap v821: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:17 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v822: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:43:18 compute-0 ceph-mon[74927]: pgmap v822: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:19 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v823: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:20 compute-0 ceph-mon[74927]: pgmap v823: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:21 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v824: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:22 compute-0 sudo[271060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:43:22 compute-0 sudo[271060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:43:22 compute-0 sudo[271060]: pam_unix(sudo:session): session closed for user root
Nov 24 18:43:22 compute-0 sudo[271085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:43:22 compute-0 sudo[271085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:43:22 compute-0 sudo[271085]: pam_unix(sudo:session): session closed for user root
Nov 24 18:43:22 compute-0 sudo[271110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:43:22 compute-0 sudo[271110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:43:22 compute-0 sudo[271110]: pam_unix(sudo:session): session closed for user root
Nov 24 18:43:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:43:22.735 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:43:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:43:22.736 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:43:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:43:22.736 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:43:22 compute-0 sudo[271135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:43:22 compute-0 sudo[271135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:43:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:43:22 compute-0 ceph-mon[74927]: pgmap v824: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:23 compute-0 sudo[271135]: pam_unix(sudo:session): session closed for user root
Nov 24 18:43:23 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:43:23 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:43:23 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:43:23 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:43:23 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:43:23 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:43:23 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 2bb85ddb-2e73-4fb6-8afd-263fae2242a0 does not exist
Nov 24 18:43:23 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 033fa4f9-854a-45a2-be67-93c20c2a0cef does not exist
Nov 24 18:43:23 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 1606bbfd-12c6-4ea0-a477-78776478a4d1 does not exist
Nov 24 18:43:23 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:43:23 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:43:23 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:43:23 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:43:23 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:43:23 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:43:23 compute-0 sudo[271191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:43:23 compute-0 sudo[271191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:43:23 compute-0 sudo[271191]: pam_unix(sudo:session): session closed for user root
Nov 24 18:43:23 compute-0 sudo[271216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:43:23 compute-0 sudo[271216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:43:23 compute-0 sudo[271216]: pam_unix(sudo:session): session closed for user root
Nov 24 18:43:23 compute-0 sudo[271241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:43:23 compute-0 sudo[271241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:43:23 compute-0 sudo[271241]: pam_unix(sudo:session): session closed for user root
Nov 24 18:43:23 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v825: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:23 compute-0 sudo[271266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:43:23 compute-0 sudo[271266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:43:23 compute-0 podman[271333]: 2025-11-24 18:43:23.828619577 +0000 UTC m=+0.037309128 container create eae5eb953d6d1c2a8a453912c2b1f52c1d14feca8fcbd65a7f57bc7e7756fd9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:43:23 compute-0 systemd[1]: Started libpod-conmon-eae5eb953d6d1c2a8a453912c2b1f52c1d14feca8fcbd65a7f57bc7e7756fd9e.scope.
Nov 24 18:43:23 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:43:23 compute-0 podman[271333]: 2025-11-24 18:43:23.812136187 +0000 UTC m=+0.020825748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:43:23 compute-0 podman[271333]: 2025-11-24 18:43:23.907596962 +0000 UTC m=+0.116286503 container init eae5eb953d6d1c2a8a453912c2b1f52c1d14feca8fcbd65a7f57bc7e7756fd9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mirzakhani, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 18:43:23 compute-0 podman[271333]: 2025-11-24 18:43:23.921084247 +0000 UTC m=+0.129773788 container start eae5eb953d6d1c2a8a453912c2b1f52c1d14feca8fcbd65a7f57bc7e7756fd9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 24 18:43:23 compute-0 podman[271333]: 2025-11-24 18:43:23.924555233 +0000 UTC m=+0.133244764 container attach eae5eb953d6d1c2a8a453912c2b1f52c1d14feca8fcbd65a7f57bc7e7756fd9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:43:23 compute-0 amazing_mirzakhani[271349]: 167 167
Nov 24 18:43:23 compute-0 systemd[1]: libpod-eae5eb953d6d1c2a8a453912c2b1f52c1d14feca8fcbd65a7f57bc7e7756fd9e.scope: Deactivated successfully.
Nov 24 18:43:23 compute-0 podman[271333]: 2025-11-24 18:43:23.930578013 +0000 UTC m=+0.139267554 container died eae5eb953d6d1c2a8a453912c2b1f52c1d14feca8fcbd65a7f57bc7e7756fd9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mirzakhani, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 18:43:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-c126f296c1763689b41f82d91a052fe0037983a64e804944cae46df09f64b79a-merged.mount: Deactivated successfully.
Nov 24 18:43:23 compute-0 podman[271333]: 2025-11-24 18:43:23.974559577 +0000 UTC m=+0.183249118 container remove eae5eb953d6d1c2a8a453912c2b1f52c1d14feca8fcbd65a7f57bc7e7756fd9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:43:23 compute-0 systemd[1]: libpod-conmon-eae5eb953d6d1c2a8a453912c2b1f52c1d14feca8fcbd65a7f57bc7e7756fd9e.scope: Deactivated successfully.
Nov 24 18:43:23 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:43:23 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:43:23 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:43:23 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:43:23 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:43:23 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:43:24 compute-0 podman[271361]: 2025-11-24 18:43:24.10375863 +0000 UTC m=+0.112510789 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 18:43:24 compute-0 podman[271397]: 2025-11-24 18:43:24.189032691 +0000 UTC m=+0.069926290 container create 4496f01bdd7bae9ec022b71c24dd36619fa46a4c498e6c97377b4960c220293d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brattain, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:43:24 compute-0 systemd[1]: Started libpod-conmon-4496f01bdd7bae9ec022b71c24dd36619fa46a4c498e6c97377b4960c220293d.scope.
Nov 24 18:43:24 compute-0 podman[271397]: 2025-11-24 18:43:24.16286309 +0000 UTC m=+0.043756769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:43:24 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf158c4f09de8e2571eefc6e6102919afffde520b16e185a98ed527017257364/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf158c4f09de8e2571eefc6e6102919afffde520b16e185a98ed527017257364/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf158c4f09de8e2571eefc6e6102919afffde520b16e185a98ed527017257364/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf158c4f09de8e2571eefc6e6102919afffde520b16e185a98ed527017257364/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf158c4f09de8e2571eefc6e6102919afffde520b16e185a98ed527017257364/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:43:24 compute-0 podman[271397]: 2025-11-24 18:43:24.297368855 +0000 UTC m=+0.178262494 container init 4496f01bdd7bae9ec022b71c24dd36619fa46a4c498e6c97377b4960c220293d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brattain, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Nov 24 18:43:24 compute-0 podman[271397]: 2025-11-24 18:43:24.318758987 +0000 UTC m=+0.199652616 container start 4496f01bdd7bae9ec022b71c24dd36619fa46a4c498e6c97377b4960c220293d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brattain, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:43:24 compute-0 podman[271397]: 2025-11-24 18:43:24.323602998 +0000 UTC m=+0.204496617 container attach 4496f01bdd7bae9ec022b71c24dd36619fa46a4c498e6c97377b4960c220293d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brattain, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:43:25 compute-0 ceph-mon[74927]: pgmap v825: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:25 compute-0 charming_brattain[271413]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:43:25 compute-0 charming_brattain[271413]: --> relative data size: 1.0
Nov 24 18:43:25 compute-0 charming_brattain[271413]: --> All data devices are unavailable
Nov 24 18:43:25 compute-0 systemd[1]: libpod-4496f01bdd7bae9ec022b71c24dd36619fa46a4c498e6c97377b4960c220293d.scope: Deactivated successfully.
Nov 24 18:43:25 compute-0 systemd[1]: libpod-4496f01bdd7bae9ec022b71c24dd36619fa46a4c498e6c97377b4960c220293d.scope: Consumed 1.057s CPU time.
Nov 24 18:43:25 compute-0 podman[271397]: 2025-11-24 18:43:25.420650601 +0000 UTC m=+1.301544220 container died 4496f01bdd7bae9ec022b71c24dd36619fa46a4c498e6c97377b4960c220293d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:43:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf158c4f09de8e2571eefc6e6102919afffde520b16e185a98ed527017257364-merged.mount: Deactivated successfully.
Nov 24 18:43:25 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v826: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:25 compute-0 podman[271397]: 2025-11-24 18:43:25.474826168 +0000 UTC m=+1.355719747 container remove 4496f01bdd7bae9ec022b71c24dd36619fa46a4c498e6c97377b4960c220293d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:43:25 compute-0 systemd[1]: libpod-conmon-4496f01bdd7bae9ec022b71c24dd36619fa46a4c498e6c97377b4960c220293d.scope: Deactivated successfully.
Nov 24 18:43:25 compute-0 sudo[271266]: pam_unix(sudo:session): session closed for user root
Nov 24 18:43:25 compute-0 sudo[271453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:43:25 compute-0 sudo[271453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:43:25 compute-0 sudo[271453]: pam_unix(sudo:session): session closed for user root
Nov 24 18:43:25 compute-0 sudo[271478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:43:25 compute-0 sudo[271478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:43:25 compute-0 sudo[271478]: pam_unix(sudo:session): session closed for user root
Nov 24 18:43:25 compute-0 sudo[271503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:43:25 compute-0 sudo[271503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:43:25 compute-0 sudo[271503]: pam_unix(sudo:session): session closed for user root
Nov 24 18:43:25 compute-0 sudo[271528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:43:25 compute-0 sudo[271528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:43:26 compute-0 podman[271595]: 2025-11-24 18:43:26.042039465 +0000 UTC m=+0.044112798 container create 03ffb6db578ab06e43c09626a4fcf536941952eaf244f0b56f32280e0b961ad0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 18:43:26 compute-0 systemd[1]: Started libpod-conmon-03ffb6db578ab06e43c09626a4fcf536941952eaf244f0b56f32280e0b961ad0.scope.
Nov 24 18:43:26 compute-0 podman[271595]: 2025-11-24 18:43:26.02414616 +0000 UTC m=+0.026219493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:43:26 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:43:26 compute-0 podman[271595]: 2025-11-24 18:43:26.138173956 +0000 UTC m=+0.140247339 container init 03ffb6db578ab06e43c09626a4fcf536941952eaf244f0b56f32280e0b961ad0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_yalow, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 18:43:26 compute-0 podman[271595]: 2025-11-24 18:43:26.14680503 +0000 UTC m=+0.148878343 container start 03ffb6db578ab06e43c09626a4fcf536941952eaf244f0b56f32280e0b961ad0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:43:26 compute-0 podman[271595]: 2025-11-24 18:43:26.150616345 +0000 UTC m=+0.152689728 container attach 03ffb6db578ab06e43c09626a4fcf536941952eaf244f0b56f32280e0b961ad0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 18:43:26 compute-0 gracious_yalow[271623]: 167 167
Nov 24 18:43:26 compute-0 systemd[1]: libpod-03ffb6db578ab06e43c09626a4fcf536941952eaf244f0b56f32280e0b961ad0.scope: Deactivated successfully.
Nov 24 18:43:26 compute-0 podman[271595]: 2025-11-24 18:43:26.153939918 +0000 UTC m=+0.156013271 container died 03ffb6db578ab06e43c09626a4fcf536941952eaf244f0b56f32280e0b961ad0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 24 18:43:26 compute-0 podman[271609]: 2025-11-24 18:43:26.163330701 +0000 UTC m=+0.067659313 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Nov 24 18:43:26 compute-0 podman[271612]: 2025-11-24 18:43:26.165987567 +0000 UTC m=+0.069338325 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 18:43:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-b57569a0cf45b2bec6f0f9976dc371d98c5c8a949885250b04d31b4d02792205-merged.mount: Deactivated successfully.
Nov 24 18:43:26 compute-0 podman[271595]: 2025-11-24 18:43:26.192741313 +0000 UTC m=+0.194814616 container remove 03ffb6db578ab06e43c09626a4fcf536941952eaf244f0b56f32280e0b961ad0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_yalow, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 18:43:26 compute-0 systemd[1]: libpod-conmon-03ffb6db578ab06e43c09626a4fcf536941952eaf244f0b56f32280e0b961ad0.scope: Deactivated successfully.
Nov 24 18:43:26 compute-0 podman[271672]: 2025-11-24 18:43:26.359266084 +0000 UTC m=+0.034510619 container create dec15d138efb8eb0268ea81012dfb576139e59a80422ed75d28a7d3d984a4c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_jackson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 18:43:26 compute-0 systemd[1]: Started libpod-conmon-dec15d138efb8eb0268ea81012dfb576139e59a80422ed75d28a7d3d984a4c19.scope.
Nov 24 18:43:26 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0057a89d23385c0c4f3f4e10c3ea695a75e5681609fb8d5be8ef0b5ab25eafa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0057a89d23385c0c4f3f4e10c3ea695a75e5681609fb8d5be8ef0b5ab25eafa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0057a89d23385c0c4f3f4e10c3ea695a75e5681609fb8d5be8ef0b5ab25eafa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0057a89d23385c0c4f3f4e10c3ea695a75e5681609fb8d5be8ef0b5ab25eafa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:43:26 compute-0 podman[271672]: 2025-11-24 18:43:26.344695892 +0000 UTC m=+0.019940457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:43:26 compute-0 podman[271672]: 2025-11-24 18:43:26.441334405 +0000 UTC m=+0.116578940 container init dec15d138efb8eb0268ea81012dfb576139e59a80422ed75d28a7d3d984a4c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_jackson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:43:26 compute-0 podman[271672]: 2025-11-24 18:43:26.450312389 +0000 UTC m=+0.125556944 container start dec15d138efb8eb0268ea81012dfb576139e59a80422ed75d28a7d3d984a4c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 24 18:43:26 compute-0 podman[271672]: 2025-11-24 18:43:26.45518836 +0000 UTC m=+0.130432905 container attach dec15d138efb8eb0268ea81012dfb576139e59a80422ed75d28a7d3d984a4c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_jackson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:43:27 compute-0 frosty_jackson[271688]: {
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:     "0": [
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:         {
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "devices": [
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "/dev/loop3"
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             ],
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "lv_name": "ceph_lv0",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "lv_size": "21470642176",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "name": "ceph_lv0",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "tags": {
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.cluster_name": "ceph",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.crush_device_class": "",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.encrypted": "0",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.osd_id": "0",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.type": "block",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.vdo": "0"
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             },
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "type": "block",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "vg_name": "ceph_vg0"
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:         }
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:     ],
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:     "1": [
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:         {
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "devices": [
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "/dev/loop4"
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             ],
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "lv_name": "ceph_lv1",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "lv_size": "21470642176",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "name": "ceph_lv1",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "tags": {
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.cluster_name": "ceph",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.crush_device_class": "",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.encrypted": "0",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.osd_id": "1",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.type": "block",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.vdo": "0"
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             },
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "type": "block",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "vg_name": "ceph_vg1"
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:         }
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:     ],
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:     "2": [
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:         {
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "devices": [
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "/dev/loop5"
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             ],
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "lv_name": "ceph_lv2",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "lv_size": "21470642176",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "name": "ceph_lv2",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "tags": {
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.cluster_name": "ceph",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.crush_device_class": "",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.encrypted": "0",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.osd_id": "2",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.type": "block",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:                 "ceph.vdo": "0"
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             },
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "type": "block",
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:             "vg_name": "ceph_vg2"
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:         }
Nov 24 18:43:27 compute-0 frosty_jackson[271688]:     ]
Nov 24 18:43:27 compute-0 frosty_jackson[271688]: }
Nov 24 18:43:27 compute-0 systemd[1]: libpod-dec15d138efb8eb0268ea81012dfb576139e59a80422ed75d28a7d3d984a4c19.scope: Deactivated successfully.
Nov 24 18:43:27 compute-0 podman[271672]: 2025-11-24 18:43:27.202392233 +0000 UTC m=+0.877636778 container died dec15d138efb8eb0268ea81012dfb576139e59a80422ed75d28a7d3d984a4c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 18:43:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0057a89d23385c0c4f3f4e10c3ea695a75e5681609fb8d5be8ef0b5ab25eafa-merged.mount: Deactivated successfully.
Nov 24 18:43:27 compute-0 podman[271672]: 2025-11-24 18:43:27.255145795 +0000 UTC m=+0.930390330 container remove dec15d138efb8eb0268ea81012dfb576139e59a80422ed75d28a7d3d984a4c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:43:27 compute-0 systemd[1]: libpod-conmon-dec15d138efb8eb0268ea81012dfb576139e59a80422ed75d28a7d3d984a4c19.scope: Deactivated successfully.
Nov 24 18:43:27 compute-0 sudo[271528]: pam_unix(sudo:session): session closed for user root
Nov 24 18:43:27 compute-0 ceph-mon[74927]: pgmap v826: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:27 compute-0 sudo[271708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:43:27 compute-0 sudo[271708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:43:27 compute-0 sudo[271708]: pam_unix(sudo:session): session closed for user root
Nov 24 18:43:27 compute-0 sudo[271733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:43:27 compute-0 sudo[271733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:43:27 compute-0 sudo[271733]: pam_unix(sudo:session): session closed for user root
Nov 24 18:43:27 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v827: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:27 compute-0 sudo[271758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:43:27 compute-0 sudo[271758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:43:27 compute-0 sudo[271758]: pam_unix(sudo:session): session closed for user root
Nov 24 18:43:27 compute-0 sudo[271783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:43:27 compute-0 sudo[271783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:43:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:43:27 compute-0 podman[271848]: 2025-11-24 18:43:27.880427816 +0000 UTC m=+0.039954025 container create 6d465888e870cfa0f2916674016ccab9b07f7fb44219c88c6a9e0fd90196546b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mahavira, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 18:43:27 compute-0 systemd[1]: Started libpod-conmon-6d465888e870cfa0f2916674016ccab9b07f7fb44219c88c6a9e0fd90196546b.scope.
Nov 24 18:43:27 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:43:27 compute-0 podman[271848]: 2025-11-24 18:43:27.866211432 +0000 UTC m=+0.025737671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:43:27 compute-0 podman[271848]: 2025-11-24 18:43:27.97347722 +0000 UTC m=+0.133003479 container init 6d465888e870cfa0f2916674016ccab9b07f7fb44219c88c6a9e0fd90196546b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 18:43:27 compute-0 podman[271848]: 2025-11-24 18:43:27.97870635 +0000 UTC m=+0.138232559 container start 6d465888e870cfa0f2916674016ccab9b07f7fb44219c88c6a9e0fd90196546b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mahavira, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 18:43:27 compute-0 inspiring_mahavira[271864]: 167 167
Nov 24 18:43:27 compute-0 systemd[1]: libpod-6d465888e870cfa0f2916674016ccab9b07f7fb44219c88c6a9e0fd90196546b.scope: Deactivated successfully.
Nov 24 18:43:27 compute-0 conmon[271864]: conmon 6d465888e870cfa0f291 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6d465888e870cfa0f2916674016ccab9b07f7fb44219c88c6a9e0fd90196546b.scope/container/memory.events
Nov 24 18:43:27 compute-0 podman[271848]: 2025-11-24 18:43:27.984784501 +0000 UTC m=+0.144310720 container attach 6d465888e870cfa0f2916674016ccab9b07f7fb44219c88c6a9e0fd90196546b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:43:27 compute-0 podman[271848]: 2025-11-24 18:43:27.98513078 +0000 UTC m=+0.144656999 container died 6d465888e870cfa0f2916674016ccab9b07f7fb44219c88c6a9e0fd90196546b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:43:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-1edaefc83fb1b60e79734e83ecc5eace055c21f6446ce5c7365c0e170e35dc08-merged.mount: Deactivated successfully.
Nov 24 18:43:28 compute-0 podman[271848]: 2025-11-24 18:43:28.0253458 +0000 UTC m=+0.184872019 container remove 6d465888e870cfa0f2916674016ccab9b07f7fb44219c88c6a9e0fd90196546b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mahavira, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:43:28 compute-0 systemd[1]: libpod-conmon-6d465888e870cfa0f2916674016ccab9b07f7fb44219c88c6a9e0fd90196546b.scope: Deactivated successfully.
Nov 24 18:43:28 compute-0 podman[271887]: 2025-11-24 18:43:28.176413517 +0000 UTC m=+0.039445352 container create d91977f5502a3dccf339d43c185c50a5b936cf816a2cc5fd27261acb76902792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 24 18:43:28 compute-0 systemd[1]: Started libpod-conmon-d91977f5502a3dccf339d43c185c50a5b936cf816a2cc5fd27261acb76902792.scope.
Nov 24 18:43:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:43:28 compute-0 podman[271887]: 2025-11-24 18:43:28.160132422 +0000 UTC m=+0.023164347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:43:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5167ee9bd6866ba02a63b91664012200d2a832b7e8a2f52d56e465b5b2ea468d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:43:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5167ee9bd6866ba02a63b91664012200d2a832b7e8a2f52d56e465b5b2ea468d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:43:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5167ee9bd6866ba02a63b91664012200d2a832b7e8a2f52d56e465b5b2ea468d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:43:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5167ee9bd6866ba02a63b91664012200d2a832b7e8a2f52d56e465b5b2ea468d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:43:28 compute-0 podman[271887]: 2025-11-24 18:43:28.266223001 +0000 UTC m=+0.129254856 container init d91977f5502a3dccf339d43c185c50a5b936cf816a2cc5fd27261acb76902792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:43:28 compute-0 podman[271887]: 2025-11-24 18:43:28.275823509 +0000 UTC m=+0.138855334 container start d91977f5502a3dccf339d43c185c50a5b936cf816a2cc5fd27261acb76902792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 24 18:43:28 compute-0 podman[271887]: 2025-11-24 18:43:28.278530037 +0000 UTC m=+0.141561882 container attach d91977f5502a3dccf339d43c185c50a5b936cf816a2cc5fd27261acb76902792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 18:43:29 compute-0 epic_satoshi[271903]: {
Nov 24 18:43:29 compute-0 epic_satoshi[271903]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:43:29 compute-0 epic_satoshi[271903]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:43:29 compute-0 epic_satoshi[271903]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:43:29 compute-0 epic_satoshi[271903]:         "osd_id": 0,
Nov 24 18:43:29 compute-0 epic_satoshi[271903]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:43:29 compute-0 epic_satoshi[271903]:         "type": "bluestore"
Nov 24 18:43:29 compute-0 epic_satoshi[271903]:     },
Nov 24 18:43:29 compute-0 epic_satoshi[271903]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:43:29 compute-0 epic_satoshi[271903]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:43:29 compute-0 epic_satoshi[271903]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:43:29 compute-0 epic_satoshi[271903]:         "osd_id": 1,
Nov 24 18:43:29 compute-0 epic_satoshi[271903]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:43:29 compute-0 epic_satoshi[271903]:         "type": "bluestore"
Nov 24 18:43:29 compute-0 epic_satoshi[271903]:     },
Nov 24 18:43:29 compute-0 epic_satoshi[271903]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:43:29 compute-0 epic_satoshi[271903]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:43:29 compute-0 epic_satoshi[271903]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:43:29 compute-0 epic_satoshi[271903]:         "osd_id": 2,
Nov 24 18:43:29 compute-0 epic_satoshi[271903]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:43:29 compute-0 epic_satoshi[271903]:         "type": "bluestore"
Nov 24 18:43:29 compute-0 epic_satoshi[271903]:     }
Nov 24 18:43:29 compute-0 epic_satoshi[271903]: }
Nov 24 18:43:29 compute-0 systemd[1]: libpod-d91977f5502a3dccf339d43c185c50a5b936cf816a2cc5fd27261acb76902792.scope: Deactivated successfully.
Nov 24 18:43:29 compute-0 podman[271887]: 2025-11-24 18:43:29.257690788 +0000 UTC m=+1.120722623 container died d91977f5502a3dccf339d43c185c50a5b936cf816a2cc5fd27261acb76902792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_satoshi, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 18:43:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-5167ee9bd6866ba02a63b91664012200d2a832b7e8a2f52d56e465b5b2ea468d-merged.mount: Deactivated successfully.
Nov 24 18:43:29 compute-0 ceph-mon[74927]: pgmap v827: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:29 compute-0 podman[271887]: 2025-11-24 18:43:29.325149445 +0000 UTC m=+1.188181280 container remove d91977f5502a3dccf339d43c185c50a5b936cf816a2cc5fd27261acb76902792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 24 18:43:29 compute-0 systemd[1]: libpod-conmon-d91977f5502a3dccf339d43c185c50a5b936cf816a2cc5fd27261acb76902792.scope: Deactivated successfully.
Nov 24 18:43:29 compute-0 sudo[271783]: pam_unix(sudo:session): session closed for user root
Nov 24 18:43:29 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:43:29 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:43:29 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:43:29 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:43:29 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 0d916dc8-5ec3-4427-adc3-1f48d8a78d67 does not exist
Nov 24 18:43:29 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 022edc19-dfeb-4192-a1ed-7b5d7446ca13 does not exist
Nov 24 18:43:29 compute-0 sudo[271951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:43:29 compute-0 sudo[271951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:43:29 compute-0 sudo[271951]: pam_unix(sudo:session): session closed for user root
Nov 24 18:43:29 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v828: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:29 compute-0 sudo[271976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:43:29 compute-0 sudo[271976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:43:29 compute-0 sudo[271976]: pam_unix(sudo:session): session closed for user root
Nov 24 18:43:30 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:43:30 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:43:30 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 18:43:30 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4022853737' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:43:30 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 18:43:30 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4022853737' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:43:30 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 18:43:30 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1414935411' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:43:30 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 18:43:30 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1414935411' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:43:30 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 18:43:30 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2614245368' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:43:30 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 18:43:30 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2614245368' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:43:31 compute-0 ceph-mon[74927]: pgmap v828: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:31 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/4022853737' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:43:31 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/4022853737' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:43:31 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/1414935411' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:43:31 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/1414935411' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:43:31 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/2614245368' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:43:31 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/2614245368' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:43:31 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v829: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:43:33 compute-0 ceph-mon[74927]: pgmap v829: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:33 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v830: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:34 compute-0 ceph-mon[74927]: pgmap v830: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:43:34
Nov 24 18:43:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:43:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:43:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'vms', '.rgw.root', '.mgr', 'backups', 'cephfs.cephfs.meta', 'volumes', 'images', 'default.rgw.control', 'default.rgw.log']
Nov 24 18:43:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:43:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:43:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:43:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:43:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:43:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:43:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:43:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:43:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:43:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:43:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:43:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:43:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:43:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:43:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:43:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:43:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:43:35 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v831: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:36 compute-0 ceph-mon[74927]: pgmap v831: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:37 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v832: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:43:38 compute-0 ceph-mon[74927]: pgmap v832: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:39 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v833: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:40 compute-0 ceph-mon[74927]: pgmap v833: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:41 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v834: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:42 compute-0 ceph-mon[74927]: pgmap v834: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:43:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:43:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:43:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:43:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:43:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:43:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:43:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:43:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:43:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:43:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:43:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:43:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:43:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:43:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:43:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:43:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:43:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:43:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:43:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:43:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:43:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:43:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:43:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:43:43 compute-0 nova_compute[270693]: 2025-11-24 18:43:43.284 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:43:43 compute-0 nova_compute[270693]: 2025-11-24 18:43:43.309 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:43:43 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v835: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:44 compute-0 ceph-mon[74927]: pgmap v835: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:45 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v836: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:46 compute-0 ceph-mon[74927]: pgmap v836: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:47 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v837: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:43:48 compute-0 ceph-mon[74927]: pgmap v837: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:49 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v838: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:50 compute-0 ceph-mon[74927]: pgmap v838: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:51 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v839: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:52 compute-0 ceph-mon[74927]: pgmap v839: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:43:53 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v840: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:54 compute-0 ceph-mon[74927]: pgmap v840: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:55 compute-0 podman[272001]: 2025-11-24 18:43:55.012323968 +0000 UTC m=+0.098349607 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 18:43:55 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v841: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:56 compute-0 ceph-mon[74927]: pgmap v841: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:56 compute-0 podman[272029]: 2025-11-24 18:43:56.963604056 +0000 UTC m=+0.053408279 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:43:56 compute-0 podman[272030]: 2025-11-24 18:43:56.967497123 +0000 UTC m=+0.053939303 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, container_name=multipathd)
Nov 24 18:43:57 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v842: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:43:58 compute-0 ceph-mon[74927]: pgmap v842: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:43:59 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v843: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:00 compute-0 ceph-mon[74927]: pgmap v843: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:01 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v844: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:02 compute-0 ceph-mon[74927]: pgmap v844: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:44:03 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v845: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:04 compute-0 ceph-mon[74927]: pgmap v845: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:44:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:44:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:44:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:44:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:44:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:44:05 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v846: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:05 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 24 18:44:05 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4141697146' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 24 18:44:05 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14349 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 24 18:44:05 compute-0 ceph-mgr[75218]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 24 18:44:05 compute-0 ceph-mgr[75218]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 24 18:44:06 compute-0 rsyslogd[1008]: imjournal from <np0005533938:ceph-mon>: begin to drop messages due to rate-limiting
Nov 24 18:44:06 compute-0 ceph-mon[74927]: pgmap v846: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:06 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/4141697146' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 24 18:44:06 compute-0 ceph-mon[74927]: from='client.14349 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 24 18:44:07 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v847: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:07 compute-0 nova_compute[270693]: 2025-11-24 18:44:07.531 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:44:07 compute-0 nova_compute[270693]: 2025-11-24 18:44:07.532 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:44:07 compute-0 nova_compute[270693]: 2025-11-24 18:44:07.533 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 18:44:07 compute-0 nova_compute[270693]: 2025-11-24 18:44:07.533 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 18:44:07 compute-0 nova_compute[270693]: 2025-11-24 18:44:07.550 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 18:44:07 compute-0 nova_compute[270693]: 2025-11-24 18:44:07.550 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:44:07 compute-0 nova_compute[270693]: 2025-11-24 18:44:07.551 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:44:07 compute-0 nova_compute[270693]: 2025-11-24 18:44:07.552 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:44:07 compute-0 nova_compute[270693]: 2025-11-24 18:44:07.552 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:44:07 compute-0 nova_compute[270693]: 2025-11-24 18:44:07.553 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:44:07 compute-0 nova_compute[270693]: 2025-11-24 18:44:07.553 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:44:07 compute-0 nova_compute[270693]: 2025-11-24 18:44:07.553 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 18:44:07 compute-0 nova_compute[270693]: 2025-11-24 18:44:07.554 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:44:07 compute-0 nova_compute[270693]: 2025-11-24 18:44:07.581 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:44:07 compute-0 nova_compute[270693]: 2025-11-24 18:44:07.582 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:44:07 compute-0 nova_compute[270693]: 2025-11-24 18:44:07.582 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:44:07 compute-0 nova_compute[270693]: 2025-11-24 18:44:07.582 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 18:44:07 compute-0 nova_compute[270693]: 2025-11-24 18:44:07.583 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:44:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:44:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:44:07 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2240237588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:44:07 compute-0 nova_compute[270693]: 2025-11-24 18:44:07.979 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:44:08 compute-0 nova_compute[270693]: 2025-11-24 18:44:08.145 270697 WARNING nova.virt.libvirt.driver [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 18:44:08 compute-0 nova_compute[270693]: 2025-11-24 18:44:08.147 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5144MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 18:44:08 compute-0 nova_compute[270693]: 2025-11-24 18:44:08.147 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:44:08 compute-0 nova_compute[270693]: 2025-11-24 18:44:08.148 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:44:08 compute-0 nova_compute[270693]: 2025-11-24 18:44:08.241 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 18:44:08 compute-0 nova_compute[270693]: 2025-11-24 18:44:08.241 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 18:44:08 compute-0 nova_compute[270693]: 2025-11-24 18:44:08.255 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:44:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:44:08 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1767890328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:44:08 compute-0 nova_compute[270693]: 2025-11-24 18:44:08.666 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:44:08 compute-0 nova_compute[270693]: 2025-11-24 18:44:08.671 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 18:44:08 compute-0 ceph-mon[74927]: pgmap v847: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:08 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2240237588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:44:08 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1767890328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:44:08 compute-0 nova_compute[270693]: 2025-11-24 18:44:08.704 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 18:44:08 compute-0 nova_compute[270693]: 2025-11-24 18:44:08.705 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 18:44:08 compute-0 nova_compute[270693]: 2025-11-24 18:44:08.706 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:44:09 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v848: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:10 compute-0 ceph-mon[74927]: pgmap v848: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:11 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v849: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:12 compute-0 ceph-mon[74927]: pgmap v849: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:44:13 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v850: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:14 compute-0 ceph-mon[74927]: pgmap v850: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:15 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v851: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:16 compute-0 ceph-mon[74927]: pgmap v851: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:17 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v852: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:44:18 compute-0 ceph-mon[74927]: pgmap v852: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:19 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v853: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:20 compute-0 ceph-mon[74927]: pgmap v853: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:21 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v854: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:21 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 24 18:44:21 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4185481522' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 24 18:44:21 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14355 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 24 18:44:21 compute-0 ceph-mgr[75218]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 24 18:44:21 compute-0 ceph-mgr[75218]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 24 18:44:21 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/4185481522' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 24 18:44:22 compute-0 ceph-mon[74927]: pgmap v854: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:22 compute-0 ceph-mon[74927]: from='client.14355 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 24 18:44:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:44:22.737 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:44:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:44:22.738 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:44:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:44:22.738 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:44:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:44:23 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v855: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:24 compute-0 ceph-mon[74927]: pgmap v855: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:25 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v856: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:26 compute-0 podman[272108]: 2025-11-24 18:44:26.025833537 +0000 UTC m=+0.112894898 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 24 18:44:26 compute-0 ceph-mon[74927]: pgmap v856: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:27 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v857: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:44:27 compute-0 podman[272135]: 2025-11-24 18:44:27.959764097 +0000 UTC m=+0.049623283 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 24 18:44:27 compute-0 podman[272136]: 2025-11-24 18:44:27.959764167 +0000 UTC m=+0.047158335 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 18:44:28 compute-0 ceph-mon[74927]: pgmap v857: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:29 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v858: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:29 compute-0 sudo[272169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:44:29 compute-0 sudo[272169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:44:29 compute-0 sudo[272169]: pam_unix(sudo:session): session closed for user root
Nov 24 18:44:29 compute-0 sudo[272194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:44:29 compute-0 sudo[272194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:44:29 compute-0 sudo[272194]: pam_unix(sudo:session): session closed for user root
Nov 24 18:44:29 compute-0 sudo[272219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:44:29 compute-0 sudo[272219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:44:29 compute-0 sudo[272219]: pam_unix(sudo:session): session closed for user root
Nov 24 18:44:29 compute-0 sudo[272244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:44:29 compute-0 sudo[272244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:44:30 compute-0 sudo[272244]: pam_unix(sudo:session): session closed for user root
Nov 24 18:44:30 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:44:30 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:44:30 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:44:30 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:44:30 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:44:30 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:44:30 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 58da8b5d-fc0a-46f0-a2b6-f460ed31702b does not exist
Nov 24 18:44:30 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 1031a4ad-3647-48f6-9bbc-bad4af908c29 does not exist
Nov 24 18:44:30 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev b9c937fe-7bc2-4be0-9803-5933326f8ed8 does not exist
Nov 24 18:44:30 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:44:30 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:44:30 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:44:30 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:44:30 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:44:30 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:44:30 compute-0 sudo[272301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:44:30 compute-0 sudo[272301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:44:30 compute-0 sudo[272301]: pam_unix(sudo:session): session closed for user root
Nov 24 18:44:30 compute-0 sudo[272326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:44:30 compute-0 sudo[272326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:44:30 compute-0 sudo[272326]: pam_unix(sudo:session): session closed for user root
Nov 24 18:44:30 compute-0 sudo[272351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:44:30 compute-0 sudo[272351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:44:30 compute-0 sudo[272351]: pam_unix(sudo:session): session closed for user root
Nov 24 18:44:30 compute-0 sudo[272376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:44:30 compute-0 sudo[272376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:44:30 compute-0 ceph-mon[74927]: pgmap v858: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:30 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:44:30 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:44:30 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:44:30 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:44:30 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:44:30 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:44:31 compute-0 podman[272440]: 2025-11-24 18:44:31.143340845 +0000 UTC m=+0.059002915 container create 5321505cbb2bf5cd61b593c95ffd92e5b81704438f0a6b823fb8612f28faee87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cartwright, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 24 18:44:31 compute-0 systemd[1]: Started libpod-conmon-5321505cbb2bf5cd61b593c95ffd92e5b81704438f0a6b823fb8612f28faee87.scope.
Nov 24 18:44:31 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:44:31 compute-0 podman[272440]: 2025-11-24 18:44:31.123464576 +0000 UTC m=+0.039126686 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:44:31 compute-0 podman[272440]: 2025-11-24 18:44:31.226027708 +0000 UTC m=+0.141689828 container init 5321505cbb2bf5cd61b593c95ffd92e5b81704438f0a6b823fb8612f28faee87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 18:44:31 compute-0 podman[272440]: 2025-11-24 18:44:31.234631081 +0000 UTC m=+0.150293171 container start 5321505cbb2bf5cd61b593c95ffd92e5b81704438f0a6b823fb8612f28faee87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 18:44:31 compute-0 podman[272440]: 2025-11-24 18:44:31.237711964 +0000 UTC m=+0.153374044 container attach 5321505cbb2bf5cd61b593c95ffd92e5b81704438f0a6b823fb8612f28faee87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 18:44:31 compute-0 distracted_cartwright[272456]: 167 167
Nov 24 18:44:31 compute-0 systemd[1]: libpod-5321505cbb2bf5cd61b593c95ffd92e5b81704438f0a6b823fb8612f28faee87.scope: Deactivated successfully.
Nov 24 18:44:31 compute-0 podman[272440]: 2025-11-24 18:44:31.241673258 +0000 UTC m=+0.157335338 container died 5321505cbb2bf5cd61b593c95ffd92e5b81704438f0a6b823fb8612f28faee87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 18:44:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f02e36298d30475078fd53618873cabf858c0762b231871f1f3305251219437-merged.mount: Deactivated successfully.
Nov 24 18:44:31 compute-0 podman[272440]: 2025-11-24 18:44:31.279734867 +0000 UTC m=+0.195396947 container remove 5321505cbb2bf5cd61b593c95ffd92e5b81704438f0a6b823fb8612f28faee87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cartwright, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 18:44:31 compute-0 systemd[1]: libpod-conmon-5321505cbb2bf5cd61b593c95ffd92e5b81704438f0a6b823fb8612f28faee87.scope: Deactivated successfully.
Nov 24 18:44:31 compute-0 podman[272479]: 2025-11-24 18:44:31.464655665 +0000 UTC m=+0.045183239 container create d9e9773e6f18e1f9fe42b6b5a106bc768dcd936421971f276231fed51bee07be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 18:44:31 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v859: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:31 compute-0 systemd[1]: Started libpod-conmon-d9e9773e6f18e1f9fe42b6b5a106bc768dcd936421971f276231fed51bee07be.scope.
Nov 24 18:44:31 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2be206f0c7627f07c2963763d923d1f9dd9f028230edbe9db35054648491381a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2be206f0c7627f07c2963763d923d1f9dd9f028230edbe9db35054648491381a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2be206f0c7627f07c2963763d923d1f9dd9f028230edbe9db35054648491381a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2be206f0c7627f07c2963763d923d1f9dd9f028230edbe9db35054648491381a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2be206f0c7627f07c2963763d923d1f9dd9f028230edbe9db35054648491381a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:44:31 compute-0 podman[272479]: 2025-11-24 18:44:31.447891719 +0000 UTC m=+0.028419303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:44:31 compute-0 podman[272479]: 2025-11-24 18:44:31.552809947 +0000 UTC m=+0.133337531 container init d9e9773e6f18e1f9fe42b6b5a106bc768dcd936421971f276231fed51bee07be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 18:44:31 compute-0 podman[272479]: 2025-11-24 18:44:31.564639116 +0000 UTC m=+0.145166710 container start d9e9773e6f18e1f9fe42b6b5a106bc768dcd936421971f276231fed51bee07be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 18:44:31 compute-0 podman[272479]: 2025-11-24 18:44:31.568334484 +0000 UTC m=+0.148862078 container attach d9e9773e6f18e1f9fe42b6b5a106bc768dcd936421971f276231fed51bee07be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:44:32 compute-0 magical_buck[272495]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:44:32 compute-0 magical_buck[272495]: --> relative data size: 1.0
Nov 24 18:44:32 compute-0 magical_buck[272495]: --> All data devices are unavailable
Nov 24 18:44:32 compute-0 systemd[1]: libpod-d9e9773e6f18e1f9fe42b6b5a106bc768dcd936421971f276231fed51bee07be.scope: Deactivated successfully.
Nov 24 18:44:32 compute-0 podman[272479]: 2025-11-24 18:44:32.673738284 +0000 UTC m=+1.254265898 container died d9e9773e6f18e1f9fe42b6b5a106bc768dcd936421971f276231fed51bee07be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_buck, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:44:32 compute-0 systemd[1]: libpod-d9e9773e6f18e1f9fe42b6b5a106bc768dcd936421971f276231fed51bee07be.scope: Consumed 1.062s CPU time.
Nov 24 18:44:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-2be206f0c7627f07c2963763d923d1f9dd9f028230edbe9db35054648491381a-merged.mount: Deactivated successfully.
Nov 24 18:44:32 compute-0 podman[272479]: 2025-11-24 18:44:32.736466156 +0000 UTC m=+1.316993730 container remove d9e9773e6f18e1f9fe42b6b5a106bc768dcd936421971f276231fed51bee07be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 18:44:32 compute-0 systemd[1]: libpod-conmon-d9e9773e6f18e1f9fe42b6b5a106bc768dcd936421971f276231fed51bee07be.scope: Deactivated successfully.
Nov 24 18:44:32 compute-0 sudo[272376]: pam_unix(sudo:session): session closed for user root
Nov 24 18:44:32 compute-0 ceph-mon[74927]: pgmap v859: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:32 compute-0 sudo[272536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:44:32 compute-0 sudo[272536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:44:32 compute-0 sudo[272536]: pam_unix(sudo:session): session closed for user root
Nov 24 18:44:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:44:32 compute-0 sudo[272561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:44:32 compute-0 sudo[272561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:44:32 compute-0 sudo[272561]: pam_unix(sudo:session): session closed for user root
Nov 24 18:44:32 compute-0 sudo[272586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:44:32 compute-0 sudo[272586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:44:32 compute-0 sudo[272586]: pam_unix(sudo:session): session closed for user root
Nov 24 18:44:32 compute-0 sudo[272611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:44:32 compute-0 sudo[272611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:44:33 compute-0 podman[272675]: 2025-11-24 18:44:33.274699059 +0000 UTC m=+0.048866825 container create 9c2d88ba4f939b678a70dcd1d7bcfb2831a1176c1d34e2db948f0101bcec6ea7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_feynman, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 18:44:33 compute-0 systemd[1]: Started libpod-conmon-9c2d88ba4f939b678a70dcd1d7bcfb2831a1176c1d34e2db948f0101bcec6ea7.scope.
Nov 24 18:44:33 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:44:33 compute-0 podman[272675]: 2025-11-24 18:44:33.25142317 +0000 UTC m=+0.025590976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:44:33 compute-0 podman[272675]: 2025-11-24 18:44:33.353839558 +0000 UTC m=+0.128007354 container init 9c2d88ba4f939b678a70dcd1d7bcfb2831a1176c1d34e2db948f0101bcec6ea7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_feynman, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 18:44:33 compute-0 podman[272675]: 2025-11-24 18:44:33.360149837 +0000 UTC m=+0.134317593 container start 9c2d88ba4f939b678a70dcd1d7bcfb2831a1176c1d34e2db948f0101bcec6ea7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_feynman, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 18:44:33 compute-0 podman[272675]: 2025-11-24 18:44:33.363741242 +0000 UTC m=+0.137909018 container attach 9c2d88ba4f939b678a70dcd1d7bcfb2831a1176c1d34e2db948f0101bcec6ea7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_feynman, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:44:33 compute-0 elated_feynman[272692]: 167 167
Nov 24 18:44:33 compute-0 systemd[1]: libpod-9c2d88ba4f939b678a70dcd1d7bcfb2831a1176c1d34e2db948f0101bcec6ea7.scope: Deactivated successfully.
Nov 24 18:44:33 compute-0 podman[272675]: 2025-11-24 18:44:33.365267828 +0000 UTC m=+0.139435594 container died 9c2d88ba4f939b678a70dcd1d7bcfb2831a1176c1d34e2db948f0101bcec6ea7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:44:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e8fc0cebfaa64c2359cce2dfeef623139b32983338369987be83ad63c429712-merged.mount: Deactivated successfully.
Nov 24 18:44:33 compute-0 podman[272675]: 2025-11-24 18:44:33.400151112 +0000 UTC m=+0.174318868 container remove 9c2d88ba4f939b678a70dcd1d7bcfb2831a1176c1d34e2db948f0101bcec6ea7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:44:33 compute-0 systemd[1]: libpod-conmon-9c2d88ba4f939b678a70dcd1d7bcfb2831a1176c1d34e2db948f0101bcec6ea7.scope: Deactivated successfully.
Nov 24 18:44:33 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v860: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:33 compute-0 podman[272716]: 2025-11-24 18:44:33.57195503 +0000 UTC m=+0.043395576 container create 382e47654ed58ca5dcae8649d708e546bc0ef6596051e5eac2b5a297be9df260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_allen, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 18:44:33 compute-0 systemd[1]: Started libpod-conmon-382e47654ed58ca5dcae8649d708e546bc0ef6596051e5eac2b5a297be9df260.scope.
Nov 24 18:44:33 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:44:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559fdc8f85f1d76d45f7395334b6acb0dba293833daa857b654b03482f2d671e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:44:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559fdc8f85f1d76d45f7395334b6acb0dba293833daa857b654b03482f2d671e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:44:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559fdc8f85f1d76d45f7395334b6acb0dba293833daa857b654b03482f2d671e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:44:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559fdc8f85f1d76d45f7395334b6acb0dba293833daa857b654b03482f2d671e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:44:33 compute-0 podman[272716]: 2025-11-24 18:44:33.554398245 +0000 UTC m=+0.025838791 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:44:33 compute-0 podman[272716]: 2025-11-24 18:44:33.658653398 +0000 UTC m=+0.130093944 container init 382e47654ed58ca5dcae8649d708e546bc0ef6596051e5eac2b5a297be9df260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_allen, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:44:33 compute-0 podman[272716]: 2025-11-24 18:44:33.667359183 +0000 UTC m=+0.138799719 container start 382e47654ed58ca5dcae8649d708e546bc0ef6596051e5eac2b5a297be9df260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_allen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 18:44:33 compute-0 podman[272716]: 2025-11-24 18:44:33.671317867 +0000 UTC m=+0.142758413 container attach 382e47654ed58ca5dcae8649d708e546bc0ef6596051e5eac2b5a297be9df260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Nov 24 18:44:34 compute-0 angry_allen[272732]: {
Nov 24 18:44:34 compute-0 angry_allen[272732]:     "0": [
Nov 24 18:44:34 compute-0 angry_allen[272732]:         {
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "devices": [
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "/dev/loop3"
Nov 24 18:44:34 compute-0 angry_allen[272732]:             ],
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "lv_name": "ceph_lv0",
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "lv_size": "21470642176",
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "name": "ceph_lv0",
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "tags": {
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.cluster_name": "ceph",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.crush_device_class": "",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.encrypted": "0",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.osd_id": "0",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.type": "block",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.vdo": "0"
Nov 24 18:44:34 compute-0 angry_allen[272732]:             },
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "type": "block",
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "vg_name": "ceph_vg0"
Nov 24 18:44:34 compute-0 angry_allen[272732]:         }
Nov 24 18:44:34 compute-0 angry_allen[272732]:     ],
Nov 24 18:44:34 compute-0 angry_allen[272732]:     "1": [
Nov 24 18:44:34 compute-0 angry_allen[272732]:         {
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "devices": [
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "/dev/loop4"
Nov 24 18:44:34 compute-0 angry_allen[272732]:             ],
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "lv_name": "ceph_lv1",
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "lv_size": "21470642176",
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "name": "ceph_lv1",
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "tags": {
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.cluster_name": "ceph",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.crush_device_class": "",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.encrypted": "0",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.osd_id": "1",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.type": "block",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.vdo": "0"
Nov 24 18:44:34 compute-0 angry_allen[272732]:             },
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "type": "block",
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "vg_name": "ceph_vg1"
Nov 24 18:44:34 compute-0 angry_allen[272732]:         }
Nov 24 18:44:34 compute-0 angry_allen[272732]:     ],
Nov 24 18:44:34 compute-0 angry_allen[272732]:     "2": [
Nov 24 18:44:34 compute-0 angry_allen[272732]:         {
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "devices": [
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "/dev/loop5"
Nov 24 18:44:34 compute-0 angry_allen[272732]:             ],
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "lv_name": "ceph_lv2",
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "lv_size": "21470642176",
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "name": "ceph_lv2",
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "tags": {
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.cluster_name": "ceph",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.crush_device_class": "",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.encrypted": "0",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.osd_id": "2",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.type": "block",
Nov 24 18:44:34 compute-0 angry_allen[272732]:                 "ceph.vdo": "0"
Nov 24 18:44:34 compute-0 angry_allen[272732]:             },
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "type": "block",
Nov 24 18:44:34 compute-0 angry_allen[272732]:             "vg_name": "ceph_vg2"
Nov 24 18:44:34 compute-0 angry_allen[272732]:         }
Nov 24 18:44:34 compute-0 angry_allen[272732]:     ]
Nov 24 18:44:34 compute-0 angry_allen[272732]: }
Nov 24 18:44:34 compute-0 systemd[1]: libpod-382e47654ed58ca5dcae8649d708e546bc0ef6596051e5eac2b5a297be9df260.scope: Deactivated successfully.
Nov 24 18:44:34 compute-0 podman[272741]: 2025-11-24 18:44:34.448462554 +0000 UTC m=+0.029936419 container died 382e47654ed58ca5dcae8649d708e546bc0ef6596051e5eac2b5a297be9df260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 18:44:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-559fdc8f85f1d76d45f7395334b6acb0dba293833daa857b654b03482f2d671e-merged.mount: Deactivated successfully.
Nov 24 18:44:34 compute-0 podman[272741]: 2025-11-24 18:44:34.517968605 +0000 UTC m=+0.099442430 container remove 382e47654ed58ca5dcae8649d708e546bc0ef6596051e5eac2b5a297be9df260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:44:34 compute-0 systemd[1]: libpod-conmon-382e47654ed58ca5dcae8649d708e546bc0ef6596051e5eac2b5a297be9df260.scope: Deactivated successfully.
Nov 24 18:44:34 compute-0 sudo[272611]: pam_unix(sudo:session): session closed for user root
Nov 24 18:44:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:44:34
Nov 24 18:44:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:44:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:44:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['backups', 'images', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'vms']
Nov 24 18:44:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:44:34 compute-0 sudo[272756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:44:34 compute-0 sudo[272756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:44:34 compute-0 sudo[272756]: pam_unix(sudo:session): session closed for user root
Nov 24 18:44:34 compute-0 sudo[272781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:44:34 compute-0 sudo[272781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:44:34 compute-0 sudo[272781]: pam_unix(sudo:session): session closed for user root
Nov 24 18:44:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:44:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:44:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:44:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:44:34 compute-0 sudo[272806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:44:34 compute-0 sudo[272806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:44:34 compute-0 sudo[272806]: pam_unix(sudo:session): session closed for user root
Nov 24 18:44:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:44:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:44:34 compute-0 sudo[272831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:44:34 compute-0 sudo[272831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:44:34 compute-0 ceph-mon[74927]: pgmap v860: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:44:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:44:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:44:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:44:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:44:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:44:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:44:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:44:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:44:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:44:35 compute-0 podman[272895]: 2025-11-24 18:44:35.068304285 +0000 UTC m=+0.043475118 container create 94678dcdd1d6596d71562443f3c753438bfbb66ec76e45d16a1d68d217bdeef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 18:44:35 compute-0 systemd[1]: Started libpod-conmon-94678dcdd1d6596d71562443f3c753438bfbb66ec76e45d16a1d68d217bdeef8.scope.
Nov 24 18:44:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:44:35 compute-0 podman[272895]: 2025-11-24 18:44:35.14219088 +0000 UTC m=+0.117361723 container init 94678dcdd1d6596d71562443f3c753438bfbb66ec76e45d16a1d68d217bdeef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_faraday, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 18:44:35 compute-0 podman[272895]: 2025-11-24 18:44:35.049565212 +0000 UTC m=+0.024736055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:44:35 compute-0 podman[272895]: 2025-11-24 18:44:35.149880302 +0000 UTC m=+0.125051135 container start 94678dcdd1d6596d71562443f3c753438bfbb66ec76e45d16a1d68d217bdeef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_faraday, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:44:35 compute-0 podman[272895]: 2025-11-24 18:44:35.153285482 +0000 UTC m=+0.128456355 container attach 94678dcdd1d6596d71562443f3c753438bfbb66ec76e45d16a1d68d217bdeef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 18:44:35 compute-0 intelligent_faraday[272910]: 167 167
Nov 24 18:44:35 compute-0 systemd[1]: libpod-94678dcdd1d6596d71562443f3c753438bfbb66ec76e45d16a1d68d217bdeef8.scope: Deactivated successfully.
Nov 24 18:44:35 compute-0 podman[272895]: 2025-11-24 18:44:35.154445819 +0000 UTC m=+0.129616652 container died 94678dcdd1d6596d71562443f3c753438bfbb66ec76e45d16a1d68d217bdeef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_faraday, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 24 18:44:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-07bb8898aa9cc22004bf81efa27d8557ac9740ab687b281b46a80de50f2e85c4-merged.mount: Deactivated successfully.
Nov 24 18:44:35 compute-0 podman[272895]: 2025-11-24 18:44:35.188671618 +0000 UTC m=+0.163842451 container remove 94678dcdd1d6596d71562443f3c753438bfbb66ec76e45d16a1d68d217bdeef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:44:35 compute-0 systemd[1]: libpod-conmon-94678dcdd1d6596d71562443f3c753438bfbb66ec76e45d16a1d68d217bdeef8.scope: Deactivated successfully.
Nov 24 18:44:35 compute-0 podman[272935]: 2025-11-24 18:44:35.366198511 +0000 UTC m=+0.044248346 container create e3d7f59c92bce1a3cf1870d8d0db96721eb6a613bb8557bd966cceb1f0c5191c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_morse, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:44:35 compute-0 systemd[1]: Started libpod-conmon-e3d7f59c92bce1a3cf1870d8d0db96721eb6a613bb8557bd966cceb1f0c5191c.scope.
Nov 24 18:44:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1880b6ebba683239790d61dc52ef69a3335fffeddbfaf28f0c056d4ec0378d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1880b6ebba683239790d61dc52ef69a3335fffeddbfaf28f0c056d4ec0378d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1880b6ebba683239790d61dc52ef69a3335fffeddbfaf28f0c056d4ec0378d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1880b6ebba683239790d61dc52ef69a3335fffeddbfaf28f0c056d4ec0378d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:44:35 compute-0 podman[272935]: 2025-11-24 18:44:35.346733301 +0000 UTC m=+0.024783176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:44:35 compute-0 podman[272935]: 2025-11-24 18:44:35.445657678 +0000 UTC m=+0.123707533 container init e3d7f59c92bce1a3cf1870d8d0db96721eb6a613bb8557bd966cceb1f0c5191c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 24 18:44:35 compute-0 podman[272935]: 2025-11-24 18:44:35.455105481 +0000 UTC m=+0.133155316 container start e3d7f59c92bce1a3cf1870d8d0db96721eb6a613bb8557bd966cceb1f0c5191c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_morse, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:44:35 compute-0 podman[272935]: 2025-11-24 18:44:35.45842507 +0000 UTC m=+0.136474935 container attach e3d7f59c92bce1a3cf1870d8d0db96721eb6a613bb8557bd966cceb1f0c5191c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 24 18:44:35 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v861: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:36 compute-0 stoic_morse[272952]: {
Nov 24 18:44:36 compute-0 stoic_morse[272952]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:44:36 compute-0 stoic_morse[272952]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:44:36 compute-0 stoic_morse[272952]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:44:36 compute-0 stoic_morse[272952]:         "osd_id": 0,
Nov 24 18:44:36 compute-0 stoic_morse[272952]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:44:36 compute-0 stoic_morse[272952]:         "type": "bluestore"
Nov 24 18:44:36 compute-0 stoic_morse[272952]:     },
Nov 24 18:44:36 compute-0 stoic_morse[272952]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:44:36 compute-0 stoic_morse[272952]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:44:36 compute-0 stoic_morse[272952]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:44:36 compute-0 stoic_morse[272952]:         "osd_id": 1,
Nov 24 18:44:36 compute-0 stoic_morse[272952]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:44:36 compute-0 stoic_morse[272952]:         "type": "bluestore"
Nov 24 18:44:36 compute-0 stoic_morse[272952]:     },
Nov 24 18:44:36 compute-0 stoic_morse[272952]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:44:36 compute-0 stoic_morse[272952]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:44:36 compute-0 stoic_morse[272952]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:44:36 compute-0 stoic_morse[272952]:         "osd_id": 2,
Nov 24 18:44:36 compute-0 stoic_morse[272952]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:44:36 compute-0 stoic_morse[272952]:         "type": "bluestore"
Nov 24 18:44:36 compute-0 stoic_morse[272952]:     }
Nov 24 18:44:36 compute-0 stoic_morse[272952]: }
Nov 24 18:44:36 compute-0 systemd[1]: libpod-e3d7f59c92bce1a3cf1870d8d0db96721eb6a613bb8557bd966cceb1f0c5191c.scope: Deactivated successfully.
Nov 24 18:44:36 compute-0 podman[272935]: 2025-11-24 18:44:36.395237218 +0000 UTC m=+1.073287063 container died e3d7f59c92bce1a3cf1870d8d0db96721eb6a613bb8557bd966cceb1f0c5191c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_morse, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:44:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1880b6ebba683239790d61dc52ef69a3335fffeddbfaf28f0c056d4ec0378d3-merged.mount: Deactivated successfully.
Nov 24 18:44:36 compute-0 podman[272935]: 2025-11-24 18:44:36.460807047 +0000 UTC m=+1.138856882 container remove e3d7f59c92bce1a3cf1870d8d0db96721eb6a613bb8557bd966cceb1f0c5191c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_morse, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:44:36 compute-0 systemd[1]: libpod-conmon-e3d7f59c92bce1a3cf1870d8d0db96721eb6a613bb8557bd966cceb1f0c5191c.scope: Deactivated successfully.
Nov 24 18:44:36 compute-0 sudo[272831]: pam_unix(sudo:session): session closed for user root
Nov 24 18:44:36 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:44:36 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:44:36 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:44:36 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:44:36 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 077ac888-13b5-40a3-84d7-c951d96eea43 does not exist
Nov 24 18:44:36 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 5607da0f-9176-4c43-ad5e-863604e8ce90 does not exist
Nov 24 18:44:36 compute-0 sudo[272999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:44:36 compute-0 sudo[272999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:44:36 compute-0 sudo[272999]: pam_unix(sudo:session): session closed for user root
Nov 24 18:44:36 compute-0 sudo[273024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:44:36 compute-0 sudo[273024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:44:36 compute-0 sudo[273024]: pam_unix(sudo:session): session closed for user root
Nov 24 18:44:36 compute-0 ceph-mon[74927]: pgmap v861: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:36 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:44:36 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:44:37 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v862: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:44:38 compute-0 ceph-mon[74927]: pgmap v862: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:39 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v863: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:40 compute-0 ceph-mon[74927]: pgmap v863: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:41 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v864: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:44:42 compute-0 ceph-mon[74927]: pgmap v864: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:44:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:44:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:44:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:44:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:44:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:44:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:44:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:44:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:44:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:44:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:44:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:44:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:44:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:44:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:44:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:44:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:44:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:44:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:44:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:44:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:44:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:44:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:44:43 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v865: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:44 compute-0 ceph-mon[74927]: pgmap v865: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:45 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v866: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:46 compute-0 ceph-mon[74927]: pgmap v866: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:47 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v867: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:44:48 compute-0 ceph-mon[74927]: pgmap v867: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:49 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v868: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:50 compute-0 ceph-mon[74927]: pgmap v868: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:51 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v869: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:44:52 compute-0 ceph-mon[74927]: pgmap v869: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:53 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v870: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:54 compute-0 ceph-mon[74927]: pgmap v870: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:55 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v871: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:56 compute-0 ceph-mon[74927]: pgmap v871: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:57 compute-0 podman[273049]: 2025-11-24 18:44:57.021605895 +0000 UTC m=+0.115582821 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 18:44:57 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v872: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:44:58 compute-0 ceph-mon[74927]: pgmap v872: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:58 compute-0 podman[273075]: 2025-11-24 18:44:58.947769672 +0000 UTC m=+0.047205216 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 24 18:44:59 compute-0 podman[273076]: 2025-11-24 18:44:59.282744244 +0000 UTC m=+0.377580109 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 24 18:44:59 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v873: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:44:59.909087) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009899909123, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2054, "num_deletes": 251, "total_data_size": 3503925, "memory_usage": 3559160, "flush_reason": "Manual Compaction"}
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Nov 24 18:44:59 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:44:59.925 179763 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:2b:64', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'fa:26:5b:32:fa:ba'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 18:44:59 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:44:59.926 179763 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 18:44:59 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:44:59.926 179763 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=302e9f34-0427-4ff9-a29b-2fc7b5250666, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009899934241, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3428463, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16333, "largest_seqno": 18386, "table_properties": {"data_size": 3419065, "index_size": 5956, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18447, "raw_average_key_size": 19, "raw_value_size": 3400448, "raw_average_value_size": 3656, "num_data_blocks": 270, "num_entries": 930, "num_filter_entries": 930, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764009668, "oldest_key_time": 1764009668, "file_creation_time": 1764009899, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 25212 microseconds, and 7206 cpu microseconds.
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:44:59.934297) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3428463 bytes OK
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:44:59.934315) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:44:59.936073) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:44:59.936090) EVENT_LOG_v1 {"time_micros": 1764009899936085, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:44:59.936106) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3495320, prev total WAL file size 3495320, number of live WAL files 2.
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:44:59.937199) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3348KB)], [38(7513KB)]
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009899937237, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11122040, "oldest_snapshot_seqno": -1}
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4416 keys, 9362389 bytes, temperature: kUnknown
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009899990774, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9362389, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9329186, "index_size": 21061, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 106777, "raw_average_key_size": 24, "raw_value_size": 9245638, "raw_average_value_size": 2093, "num_data_blocks": 896, "num_entries": 4416, "num_filter_entries": 4416, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008324, "oldest_key_time": 0, "file_creation_time": 1764009899, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:44:59.991114) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9362389 bytes
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:44:59.992586) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 207.4 rd, 174.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.3 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 4930, records dropped: 514 output_compression: NoCompression
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:44:59.992601) EVENT_LOG_v1 {"time_micros": 1764009899992594, "job": 18, "event": "compaction_finished", "compaction_time_micros": 53636, "compaction_time_cpu_micros": 20832, "output_level": 6, "num_output_files": 1, "total_output_size": 9362389, "num_input_records": 4930, "num_output_records": 4416, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009899993180, "job": 18, "event": "table_file_deletion", "file_number": 40}
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009899994291, "job": 18, "event": "table_file_deletion", "file_number": 38}
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:44:59.937071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:44:59.994376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:44:59.994381) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:44:59.994382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:44:59.994384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:44:59 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:44:59.994386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:45:00 compute-0 ceph-mon[74927]: pgmap v873: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:01 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v874: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:45:02 compute-0 ceph-mon[74927]: pgmap v874: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:03 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v875: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:45:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:45:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:45:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:45:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:45:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:45:04 compute-0 ceph-mon[74927]: pgmap v875: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:05 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v876: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:06 compute-0 ceph-mon[74927]: pgmap v876: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:07 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v877: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:45:08 compute-0 nova_compute[270693]: 2025-11-24 18:45:08.697 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:45:08 compute-0 nova_compute[270693]: 2025-11-24 18:45:08.722 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:45:08 compute-0 nova_compute[270693]: 2025-11-24 18:45:08.722 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 18:45:08 compute-0 nova_compute[270693]: 2025-11-24 18:45:08.722 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 18:45:08 compute-0 nova_compute[270693]: 2025-11-24 18:45:08.745 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 18:45:08 compute-0 nova_compute[270693]: 2025-11-24 18:45:08.745 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:45:08 compute-0 nova_compute[270693]: 2025-11-24 18:45:08.746 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 18:45:08 compute-0 nova_compute[270693]: 2025-11-24 18:45:08.746 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:45:08 compute-0 nova_compute[270693]: 2025-11-24 18:45:08.775 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:45:08 compute-0 nova_compute[270693]: 2025-11-24 18:45:08.776 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:45:08 compute-0 nova_compute[270693]: 2025-11-24 18:45:08.777 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:45:08 compute-0 nova_compute[270693]: 2025-11-24 18:45:08.777 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 18:45:08 compute-0 nova_compute[270693]: 2025-11-24 18:45:08.778 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:45:08 compute-0 ceph-mon[74927]: pgmap v877: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:45:09 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3309010706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:45:09 compute-0 nova_compute[270693]: 2025-11-24 18:45:09.196 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:45:09 compute-0 nova_compute[270693]: 2025-11-24 18:45:09.376 270697 WARNING nova.virt.libvirt.driver [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 18:45:09 compute-0 nova_compute[270693]: 2025-11-24 18:45:09.377 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5174MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 18:45:09 compute-0 nova_compute[270693]: 2025-11-24 18:45:09.378 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:45:09 compute-0 nova_compute[270693]: 2025-11-24 18:45:09.378 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:45:09 compute-0 nova_compute[270693]: 2025-11-24 18:45:09.458 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 18:45:09 compute-0 nova_compute[270693]: 2025-11-24 18:45:09.458 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 18:45:09 compute-0 nova_compute[270693]: 2025-11-24 18:45:09.499 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:45:09 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v878: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:45:09 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/696101940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:45:09 compute-0 nova_compute[270693]: 2025-11-24 18:45:09.936 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:45:09 compute-0 nova_compute[270693]: 2025-11-24 18:45:09.942 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 18:45:09 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3309010706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:45:09 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/696101940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:45:10 compute-0 nova_compute[270693]: 2025-11-24 18:45:10.345 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 18:45:10 compute-0 nova_compute[270693]: 2025-11-24 18:45:10.347 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 18:45:10 compute-0 nova_compute[270693]: 2025-11-24 18:45:10.347 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.969s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:45:10 compute-0 ceph-mon[74927]: pgmap v878: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:11 compute-0 nova_compute[270693]: 2025-11-24 18:45:11.131 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:45:11 compute-0 nova_compute[270693]: 2025-11-24 18:45:11.131 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:45:11 compute-0 nova_compute[270693]: 2025-11-24 18:45:11.131 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:45:11 compute-0 nova_compute[270693]: 2025-11-24 18:45:11.132 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:45:11 compute-0 nova_compute[270693]: 2025-11-24 18:45:11.132 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:45:11 compute-0 nova_compute[270693]: 2025-11-24 18:45:11.132 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:45:11 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v879: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:45:12.830445) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009912830481, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 349, "num_deletes": 252, "total_data_size": 182597, "memory_usage": 189248, "flush_reason": "Manual Compaction"}
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009912833810, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 180761, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18387, "largest_seqno": 18735, "table_properties": {"data_size": 178600, "index_size": 325, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5679, "raw_average_key_size": 19, "raw_value_size": 174338, "raw_average_value_size": 592, "num_data_blocks": 15, "num_entries": 294, "num_filter_entries": 294, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764009900, "oldest_key_time": 1764009900, "file_creation_time": 1764009912, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 3395 microseconds, and 909 cpu microseconds.
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:45:12.833842) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 180761 bytes OK
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:45:12.833856) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:45:12.835575) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:45:12.835585) EVENT_LOG_v1 {"time_micros": 1764009912835582, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:45:12.835596) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 180238, prev total WAL file size 180238, number of live WAL files 2.
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:45:12.835990) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373535' seq:0, type:0; will stop at (end)
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(176KB)], [41(9142KB)]
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009912836040, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 9543150, "oldest_snapshot_seqno": -1}
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4199 keys, 6229651 bytes, temperature: kUnknown
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009912889852, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6229651, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6202453, "index_size": 15569, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10565, "raw_key_size": 102695, "raw_average_key_size": 24, "raw_value_size": 6127286, "raw_average_value_size": 1459, "num_data_blocks": 657, "num_entries": 4199, "num_filter_entries": 4199, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008324, "oldest_key_time": 0, "file_creation_time": 1764009912, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:45:12.890476) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6229651 bytes
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:45:12.892145) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 176.0 rd, 114.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 8.9 +0.0 blob) out(5.9 +0.0 blob), read-write-amplify(87.3) write-amplify(34.5) OK, records in: 4710, records dropped: 511 output_compression: NoCompression
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:45:12.892178) EVENT_LOG_v1 {"time_micros": 1764009912892164, "job": 20, "event": "compaction_finished", "compaction_time_micros": 54224, "compaction_time_cpu_micros": 32128, "output_level": 6, "num_output_files": 1, "total_output_size": 6229651, "num_input_records": 4710, "num_output_records": 4199, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009912893216, "job": 20, "event": "table_file_deletion", "file_number": 43}
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009912897148, "job": 20, "event": "table_file_deletion", "file_number": 41}
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:45:12.835871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:45:12.897444) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:45:12.897452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:45:12.897455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:45:12.897458) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:45:12 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:45:12.897461) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:45:12 compute-0 ceph-mon[74927]: pgmap v879: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:13 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v880: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:14 compute-0 ceph-mon[74927]: pgmap v880: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:15 compute-0 rsyslogd[1008]: imjournal: 594 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 24 18:45:15 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v881: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:16 compute-0 ceph-mon[74927]: pgmap v881: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:17 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v882: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:45:19 compute-0 ceph-mon[74927]: pgmap v882: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 18:45:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1708866176' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:45:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 18:45:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1708866176' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:45:19 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v883: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:20 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/1708866176' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:45:20 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/1708866176' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:45:21 compute-0 ceph-mon[74927]: pgmap v883: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:21 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v884: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:45:22.739 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:45:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:45:22.740 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:45:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:45:22.740 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:45:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:45:23 compute-0 ceph-mon[74927]: pgmap v884: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:23 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v885: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:25 compute-0 ceph-mon[74927]: pgmap v885: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:25 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v886: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:27 compute-0 ceph-mon[74927]: pgmap v886: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:27 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v887: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:45:27 compute-0 podman[273159]: 2025-11-24 18:45:27.98027755 +0000 UTC m=+0.079030348 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 24 18:45:29 compute-0 ceph-mon[74927]: pgmap v887: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:29 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v888: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:29 compute-0 podman[273186]: 2025-11-24 18:45:29.992874459 +0000 UTC m=+0.090158700 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 18:45:30 compute-0 podman[273187]: 2025-11-24 18:45:30.02466265 +0000 UTC m=+0.107117501 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 18:45:31 compute-0 ceph-mon[74927]: pgmap v888: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:31 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v889: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:45:33 compute-0 ceph-mon[74927]: pgmap v889: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:33 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v890: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:45:34
Nov 24 18:45:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:45:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:45:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'cephfs.cephfs.data', 'volumes', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'vms', '.rgw.root', 'default.rgw.control']
Nov 24 18:45:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:45:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:45:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:45:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:45:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:45:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:45:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:45:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:45:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:45:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:45:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:45:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:45:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:45:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:45:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:45:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:45:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:45:35 compute-0 ceph-mon[74927]: pgmap v890: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:35 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v891: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:36 compute-0 sudo[273223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:45:36 compute-0 sudo[273223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:45:36 compute-0 sudo[273223]: pam_unix(sudo:session): session closed for user root
Nov 24 18:45:36 compute-0 sudo[273248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:45:36 compute-0 sudo[273248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:45:36 compute-0 sudo[273248]: pam_unix(sudo:session): session closed for user root
Nov 24 18:45:36 compute-0 sudo[273273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:45:36 compute-0 sudo[273273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:45:36 compute-0 sudo[273273]: pam_unix(sudo:session): session closed for user root
Nov 24 18:45:36 compute-0 sudo[273298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:45:36 compute-0 sudo[273298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:45:37 compute-0 ceph-mon[74927]: pgmap v891: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:37 compute-0 sudo[273298]: pam_unix(sudo:session): session closed for user root
Nov 24 18:45:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:45:37 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:45:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:45:37 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:45:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:45:37 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:45:37 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 411f0c84-1c9d-4dfe-b968-ee29d48aef52 does not exist
Nov 24 18:45:37 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 9468b2a7-fbbb-4a53-a948-14dc81bea8fd does not exist
Nov 24 18:45:37 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 2a808531-6f0a-41a5-9f6e-914fb8df27d1 does not exist
Nov 24 18:45:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:45:37 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:45:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:45:37 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:45:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:45:37 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:45:37 compute-0 sudo[273354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:45:37 compute-0 sudo[273354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:45:37 compute-0 sudo[273354]: pam_unix(sudo:session): session closed for user root
Nov 24 18:45:37 compute-0 sudo[273379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:45:37 compute-0 sudo[273379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:45:37 compute-0 sudo[273379]: pam_unix(sudo:session): session closed for user root
Nov 24 18:45:37 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v892: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:37 compute-0 sudo[273404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:45:37 compute-0 sudo[273404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:45:37 compute-0 sudo[273404]: pam_unix(sudo:session): session closed for user root
Nov 24 18:45:37 compute-0 sudo[273429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:45:37 compute-0 sudo[273429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:45:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:45:37 compute-0 podman[273495]: 2025-11-24 18:45:37.932634259 +0000 UTC m=+0.045384643 container create ebd3109ae11276de540c6175ee9d845083c63035ad2d8740056d4f5974839f43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 18:45:37 compute-0 systemd[1]: Started libpod-conmon-ebd3109ae11276de540c6175ee9d845083c63035ad2d8740056d4f5974839f43.scope.
Nov 24 18:45:37 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:45:38 compute-0 podman[273495]: 2025-11-24 18:45:37.910590788 +0000 UTC m=+0.023341252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:45:38 compute-0 podman[273495]: 2025-11-24 18:45:38.007896257 +0000 UTC m=+0.120646691 container init ebd3109ae11276de540c6175ee9d845083c63035ad2d8740056d4f5974839f43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:45:38 compute-0 podman[273495]: 2025-11-24 18:45:38.014173935 +0000 UTC m=+0.126924329 container start ebd3109ae11276de540c6175ee9d845083c63035ad2d8740056d4f5974839f43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bohr, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 18:45:38 compute-0 podman[273495]: 2025-11-24 18:45:38.01693576 +0000 UTC m=+0.129686164 container attach ebd3109ae11276de540c6175ee9d845083c63035ad2d8740056d4f5974839f43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bohr, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:45:38 compute-0 hardcore_bohr[273512]: 167 167
Nov 24 18:45:38 compute-0 systemd[1]: libpod-ebd3109ae11276de540c6175ee9d845083c63035ad2d8740056d4f5974839f43.scope: Deactivated successfully.
Nov 24 18:45:38 compute-0 podman[273495]: 2025-11-24 18:45:38.018882856 +0000 UTC m=+0.131633250 container died ebd3109ae11276de540c6175ee9d845083c63035ad2d8740056d4f5974839f43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bohr, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 18:45:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-14ba781f9c67ad8886f58bfb7da2855477d5fdab092eab13ac5a69758b4143f6-merged.mount: Deactivated successfully.
Nov 24 18:45:38 compute-0 podman[273495]: 2025-11-24 18:45:38.056530376 +0000 UTC m=+0.169280760 container remove ebd3109ae11276de540c6175ee9d845083c63035ad2d8740056d4f5974839f43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bohr, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:45:38 compute-0 systemd[1]: libpod-conmon-ebd3109ae11276de540c6175ee9d845083c63035ad2d8740056d4f5974839f43.scope: Deactivated successfully.
Nov 24 18:45:38 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:45:38 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:45:38 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:45:38 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:45:38 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:45:38 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:45:38 compute-0 podman[273534]: 2025-11-24 18:45:38.216468773 +0000 UTC m=+0.041147393 container create cbe7adc79494e7dcc40c42914770b7888169b7404023906132a4652a8ca267bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 18:45:38 compute-0 systemd[1]: Started libpod-conmon-cbe7adc79494e7dcc40c42914770b7888169b7404023906132a4652a8ca267bb.scope.
Nov 24 18:45:38 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:45:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cb8a0a7fc4a1c810b5d94feed7a0d2c8dfc71af5ec80d7a58d7e4639d63cdbd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:45:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cb8a0a7fc4a1c810b5d94feed7a0d2c8dfc71af5ec80d7a58d7e4639d63cdbd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:45:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cb8a0a7fc4a1c810b5d94feed7a0d2c8dfc71af5ec80d7a58d7e4639d63cdbd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:45:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cb8a0a7fc4a1c810b5d94feed7a0d2c8dfc71af5ec80d7a58d7e4639d63cdbd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:45:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cb8a0a7fc4a1c810b5d94feed7a0d2c8dfc71af5ec80d7a58d7e4639d63cdbd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:45:38 compute-0 podman[273534]: 2025-11-24 18:45:38.200768163 +0000 UTC m=+0.025446813 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:45:38 compute-0 podman[273534]: 2025-11-24 18:45:38.298822729 +0000 UTC m=+0.123501439 container init cbe7adc79494e7dcc40c42914770b7888169b7404023906132a4652a8ca267bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_matsumoto, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:45:38 compute-0 podman[273534]: 2025-11-24 18:45:38.305741902 +0000 UTC m=+0.130420522 container start cbe7adc79494e7dcc40c42914770b7888169b7404023906132a4652a8ca267bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:45:38 compute-0 podman[273534]: 2025-11-24 18:45:38.308637861 +0000 UTC m=+0.133316481 container attach cbe7adc79494e7dcc40c42914770b7888169b7404023906132a4652a8ca267bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_matsumoto, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:45:39 compute-0 elastic_matsumoto[273550]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:45:39 compute-0 elastic_matsumoto[273550]: --> relative data size: 1.0
Nov 24 18:45:39 compute-0 elastic_matsumoto[273550]: --> All data devices are unavailable
Nov 24 18:45:39 compute-0 systemd[1]: libpod-cbe7adc79494e7dcc40c42914770b7888169b7404023906132a4652a8ca267bb.scope: Deactivated successfully.
Nov 24 18:45:39 compute-0 podman[273534]: 2025-11-24 18:45:39.3458165 +0000 UTC m=+1.170495110 container died cbe7adc79494e7dcc40c42914770b7888169b7404023906132a4652a8ca267bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 18:45:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cb8a0a7fc4a1c810b5d94feed7a0d2c8dfc71af5ec80d7a58d7e4639d63cdbd-merged.mount: Deactivated successfully.
Nov 24 18:45:39 compute-0 ceph-mon[74927]: pgmap v892: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:39 compute-0 podman[273534]: 2025-11-24 18:45:39.399807015 +0000 UTC m=+1.224485635 container remove cbe7adc79494e7dcc40c42914770b7888169b7404023906132a4652a8ca267bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_matsumoto, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 24 18:45:39 compute-0 systemd[1]: libpod-conmon-cbe7adc79494e7dcc40c42914770b7888169b7404023906132a4652a8ca267bb.scope: Deactivated successfully.
Nov 24 18:45:39 compute-0 sudo[273429]: pam_unix(sudo:session): session closed for user root
Nov 24 18:45:39 compute-0 sudo[273589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:45:39 compute-0 sudo[273589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:45:39 compute-0 sudo[273589]: pam_unix(sudo:session): session closed for user root
Nov 24 18:45:39 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v893: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:39 compute-0 sudo[273614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:45:39 compute-0 sudo[273614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:45:39 compute-0 sudo[273614]: pam_unix(sudo:session): session closed for user root
Nov 24 18:45:39 compute-0 sudo[273639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:45:39 compute-0 sudo[273639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:45:39 compute-0 sudo[273639]: pam_unix(sudo:session): session closed for user root
Nov 24 18:45:39 compute-0 sudo[273664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:45:39 compute-0 sudo[273664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:45:39 compute-0 podman[273731]: 2025-11-24 18:45:39.928849191 +0000 UTC m=+0.035440028 container create fdcf0bff18a02a4f706472abd50d2761eea0e6dadc6ff9adfa430fe1c4faef37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 18:45:39 compute-0 systemd[1]: Started libpod-conmon-fdcf0bff18a02a4f706472abd50d2761eea0e6dadc6ff9adfa430fe1c4faef37.scope.
Nov 24 18:45:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:45:40 compute-0 podman[273731]: 2025-11-24 18:45:39.999819908 +0000 UTC m=+0.106410765 container init fdcf0bff18a02a4f706472abd50d2761eea0e6dadc6ff9adfa430fe1c4faef37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wiles, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 18:45:40 compute-0 podman[273731]: 2025-11-24 18:45:40.006501095 +0000 UTC m=+0.113091932 container start fdcf0bff18a02a4f706472abd50d2761eea0e6dadc6ff9adfa430fe1c4faef37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wiles, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:45:40 compute-0 blissful_wiles[273747]: 167 167
Nov 24 18:45:40 compute-0 podman[273731]: 2025-11-24 18:45:40.010189412 +0000 UTC m=+0.116780249 container attach fdcf0bff18a02a4f706472abd50d2761eea0e6dadc6ff9adfa430fe1c4faef37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 18:45:40 compute-0 podman[273731]: 2025-11-24 18:45:39.914318068 +0000 UTC m=+0.020908925 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:45:40 compute-0 systemd[1]: libpod-fdcf0bff18a02a4f706472abd50d2761eea0e6dadc6ff9adfa430fe1c4faef37.scope: Deactivated successfully.
Nov 24 18:45:40 compute-0 podman[273731]: 2025-11-24 18:45:40.012422385 +0000 UTC m=+0.119013222 container died fdcf0bff18a02a4f706472abd50d2761eea0e6dadc6ff9adfa430fe1c4faef37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wiles, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:45:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-a420a30a07befc038b88c2fae075e7b515d41b445b663e51544e4f523dffe8f3-merged.mount: Deactivated successfully.
Nov 24 18:45:40 compute-0 podman[273731]: 2025-11-24 18:45:40.045352023 +0000 UTC m=+0.151942860 container remove fdcf0bff18a02a4f706472abd50d2761eea0e6dadc6ff9adfa430fe1c4faef37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 18:45:40 compute-0 systemd[1]: libpod-conmon-fdcf0bff18a02a4f706472abd50d2761eea0e6dadc6ff9adfa430fe1c4faef37.scope: Deactivated successfully.
Nov 24 18:45:40 compute-0 podman[273772]: 2025-11-24 18:45:40.199821292 +0000 UTC m=+0.037840495 container create 42d6467a7f3d932e10492d75c8c7158811dc85fea1be3c5f181d8542434e7809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bassi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:45:40 compute-0 systemd[1]: Started libpod-conmon-42d6467a7f3d932e10492d75c8c7158811dc85fea1be3c5f181d8542434e7809.scope.
Nov 24 18:45:40 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/183a5550692f556bf0a09bc4832b46a36f14a6b3e4a4e51154624d26393a5584/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/183a5550692f556bf0a09bc4832b46a36f14a6b3e4a4e51154624d26393a5584/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/183a5550692f556bf0a09bc4832b46a36f14a6b3e4a4e51154624d26393a5584/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/183a5550692f556bf0a09bc4832b46a36f14a6b3e4a4e51154624d26393a5584/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:45:40 compute-0 podman[273772]: 2025-11-24 18:45:40.266134698 +0000 UTC m=+0.104153921 container init 42d6467a7f3d932e10492d75c8c7158811dc85fea1be3c5f181d8542434e7809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:45:40 compute-0 podman[273772]: 2025-11-24 18:45:40.27892926 +0000 UTC m=+0.116948503 container start 42d6467a7f3d932e10492d75c8c7158811dc85fea1be3c5f181d8542434e7809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:45:40 compute-0 podman[273772]: 2025-11-24 18:45:40.184884829 +0000 UTC m=+0.022904062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:45:40 compute-0 podman[273772]: 2025-11-24 18:45:40.283036507 +0000 UTC m=+0.121055760 container attach 42d6467a7f3d932e10492d75c8c7158811dc85fea1be3c5f181d8542434e7809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 18:45:41 compute-0 nice_bassi[273788]: {
Nov 24 18:45:41 compute-0 nice_bassi[273788]:     "0": [
Nov 24 18:45:41 compute-0 nice_bassi[273788]:         {
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "devices": [
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "/dev/loop3"
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             ],
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "lv_name": "ceph_lv0",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "lv_size": "21470642176",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "name": "ceph_lv0",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "tags": {
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.cluster_name": "ceph",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.crush_device_class": "",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.encrypted": "0",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.osd_id": "0",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.type": "block",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.vdo": "0"
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             },
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "type": "block",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "vg_name": "ceph_vg0"
Nov 24 18:45:41 compute-0 nice_bassi[273788]:         }
Nov 24 18:45:41 compute-0 nice_bassi[273788]:     ],
Nov 24 18:45:41 compute-0 nice_bassi[273788]:     "1": [
Nov 24 18:45:41 compute-0 nice_bassi[273788]:         {
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "devices": [
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "/dev/loop4"
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             ],
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "lv_name": "ceph_lv1",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "lv_size": "21470642176",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "name": "ceph_lv1",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "tags": {
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.cluster_name": "ceph",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.crush_device_class": "",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.encrypted": "0",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.osd_id": "1",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.type": "block",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.vdo": "0"
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             },
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "type": "block",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "vg_name": "ceph_vg1"
Nov 24 18:45:41 compute-0 nice_bassi[273788]:         }
Nov 24 18:45:41 compute-0 nice_bassi[273788]:     ],
Nov 24 18:45:41 compute-0 nice_bassi[273788]:     "2": [
Nov 24 18:45:41 compute-0 nice_bassi[273788]:         {
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "devices": [
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "/dev/loop5"
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             ],
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "lv_name": "ceph_lv2",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "lv_size": "21470642176",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "name": "ceph_lv2",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "tags": {
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.cluster_name": "ceph",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.crush_device_class": "",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.encrypted": "0",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.osd_id": "2",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.type": "block",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:                 "ceph.vdo": "0"
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             },
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "type": "block",
Nov 24 18:45:41 compute-0 nice_bassi[273788]:             "vg_name": "ceph_vg2"
Nov 24 18:45:41 compute-0 nice_bassi[273788]:         }
Nov 24 18:45:41 compute-0 nice_bassi[273788]:     ]
Nov 24 18:45:41 compute-0 nice_bassi[273788]: }
Nov 24 18:45:41 compute-0 systemd[1]: libpod-42d6467a7f3d932e10492d75c8c7158811dc85fea1be3c5f181d8542434e7809.scope: Deactivated successfully.
Nov 24 18:45:41 compute-0 podman[273772]: 2025-11-24 18:45:41.059694343 +0000 UTC m=+0.897713546 container died 42d6467a7f3d932e10492d75c8c7158811dc85fea1be3c5f181d8542434e7809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bassi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:45:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-183a5550692f556bf0a09bc4832b46a36f14a6b3e4a4e51154624d26393a5584-merged.mount: Deactivated successfully.
Nov 24 18:45:41 compute-0 podman[273772]: 2025-11-24 18:45:41.268335711 +0000 UTC m=+1.106354954 container remove 42d6467a7f3d932e10492d75c8c7158811dc85fea1be3c5f181d8542434e7809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bassi, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:45:41 compute-0 systemd[1]: libpod-conmon-42d6467a7f3d932e10492d75c8c7158811dc85fea1be3c5f181d8542434e7809.scope: Deactivated successfully.
Nov 24 18:45:41 compute-0 sudo[273664]: pam_unix(sudo:session): session closed for user root
Nov 24 18:45:41 compute-0 sudo[273810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:45:41 compute-0 sudo[273810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:45:41 compute-0 sudo[273810]: pam_unix(sudo:session): session closed for user root
Nov 24 18:45:41 compute-0 ceph-mon[74927]: pgmap v893: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:41 compute-0 sudo[273835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:45:41 compute-0 sudo[273835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:45:41 compute-0 sudo[273835]: pam_unix(sudo:session): session closed for user root
Nov 24 18:45:41 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v894: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:41 compute-0 sudo[273860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:45:41 compute-0 sudo[273860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:45:41 compute-0 sudo[273860]: pam_unix(sudo:session): session closed for user root
Nov 24 18:45:41 compute-0 sudo[273885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:45:41 compute-0 sudo[273885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:45:41 compute-0 podman[273950]: 2025-11-24 18:45:41.939391221 +0000 UTC m=+0.056039665 container create 501f763be93bcf20f6da8afb7822ad5de65e64145162427518bef4332d4aba31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_moser, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:45:41 compute-0 systemd[1]: Started libpod-conmon-501f763be93bcf20f6da8afb7822ad5de65e64145162427518bef4332d4aba31.scope.
Nov 24 18:45:42 compute-0 podman[273950]: 2025-11-24 18:45:41.914015271 +0000 UTC m=+0.030663735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:45:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:45:42 compute-0 podman[273950]: 2025-11-24 18:45:42.039056365 +0000 UTC m=+0.155704819 container init 501f763be93bcf20f6da8afb7822ad5de65e64145162427518bef4332d4aba31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_moser, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 18:45:42 compute-0 podman[273950]: 2025-11-24 18:45:42.044651417 +0000 UTC m=+0.161299841 container start 501f763be93bcf20f6da8afb7822ad5de65e64145162427518bef4332d4aba31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_moser, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 18:45:42 compute-0 infallible_moser[273966]: 167 167
Nov 24 18:45:42 compute-0 systemd[1]: libpod-501f763be93bcf20f6da8afb7822ad5de65e64145162427518bef4332d4aba31.scope: Deactivated successfully.
Nov 24 18:45:42 compute-0 podman[273950]: 2025-11-24 18:45:42.06170051 +0000 UTC m=+0.178348954 container attach 501f763be93bcf20f6da8afb7822ad5de65e64145162427518bef4332d4aba31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_moser, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 18:45:42 compute-0 podman[273950]: 2025-11-24 18:45:42.062095609 +0000 UTC m=+0.178744063 container died 501f763be93bcf20f6da8afb7822ad5de65e64145162427518bef4332d4aba31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_moser, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:45:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-86aa914a4de60c8d1667448339822273255def061b7c5585e4330992b3cb7a51-merged.mount: Deactivated successfully.
Nov 24 18:45:42 compute-0 podman[273950]: 2025-11-24 18:45:42.205920497 +0000 UTC m=+0.322568931 container remove 501f763be93bcf20f6da8afb7822ad5de65e64145162427518bef4332d4aba31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 18:45:42 compute-0 systemd[1]: libpod-conmon-501f763be93bcf20f6da8afb7822ad5de65e64145162427518bef4332d4aba31.scope: Deactivated successfully.
Nov 24 18:45:42 compute-0 podman[273990]: 2025-11-24 18:45:42.366451528 +0000 UTC m=+0.041303986 container create c9dac9d4df657057640a0327a3ec2208c32e88eb8b6198a0f2904de2eee7a615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:45:42 compute-0 systemd[1]: Started libpod-conmon-c9dac9d4df657057640a0327a3ec2208c32e88eb8b6198a0f2904de2eee7a615.scope.
Nov 24 18:45:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae8b84339254d873ad196c02f4e711550a5e9c9c0003ce01b6d10f27b93cda2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae8b84339254d873ad196c02f4e711550a5e9c9c0003ce01b6d10f27b93cda2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae8b84339254d873ad196c02f4e711550a5e9c9c0003ce01b6d10f27b93cda2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae8b84339254d873ad196c02f4e711550a5e9c9c0003ce01b6d10f27b93cda2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:45:42 compute-0 podman[273990]: 2025-11-24 18:45:42.44104847 +0000 UTC m=+0.115900928 container init c9dac9d4df657057640a0327a3ec2208c32e88eb8b6198a0f2904de2eee7a615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_elgamal, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:45:42 compute-0 podman[273990]: 2025-11-24 18:45:42.346023806 +0000 UTC m=+0.020876294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:45:42 compute-0 podman[273990]: 2025-11-24 18:45:42.447459912 +0000 UTC m=+0.122312370 container start c9dac9d4df657057640a0327a3ec2208c32e88eb8b6198a0f2904de2eee7a615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_elgamal, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 18:45:42 compute-0 podman[273990]: 2025-11-24 18:45:42.454694193 +0000 UTC m=+0.129546651 container attach c9dac9d4df657057640a0327a3ec2208c32e88eb8b6198a0f2904de2eee7a615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_elgamal, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:45:42 compute-0 ceph-mon[74927]: pgmap v894: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:45:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:45:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:45:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:45:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:45:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:45:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:45:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:45:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:45:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:45:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:45:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:45:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:45:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:45:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:45:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:45:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:45:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:45:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:45:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:45:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:45:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:45:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:45:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:45:43 compute-0 boring_elgamal[274006]: {
Nov 24 18:45:43 compute-0 boring_elgamal[274006]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:45:43 compute-0 boring_elgamal[274006]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:45:43 compute-0 boring_elgamal[274006]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:45:43 compute-0 boring_elgamal[274006]:         "osd_id": 0,
Nov 24 18:45:43 compute-0 boring_elgamal[274006]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:45:43 compute-0 boring_elgamal[274006]:         "type": "bluestore"
Nov 24 18:45:43 compute-0 boring_elgamal[274006]:     },
Nov 24 18:45:43 compute-0 boring_elgamal[274006]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:45:43 compute-0 boring_elgamal[274006]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:45:43 compute-0 boring_elgamal[274006]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:45:43 compute-0 boring_elgamal[274006]:         "osd_id": 1,
Nov 24 18:45:43 compute-0 boring_elgamal[274006]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:45:43 compute-0 boring_elgamal[274006]:         "type": "bluestore"
Nov 24 18:45:43 compute-0 boring_elgamal[274006]:     },
Nov 24 18:45:43 compute-0 boring_elgamal[274006]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:45:43 compute-0 boring_elgamal[274006]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:45:43 compute-0 boring_elgamal[274006]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:45:43 compute-0 boring_elgamal[274006]:         "osd_id": 2,
Nov 24 18:45:43 compute-0 boring_elgamal[274006]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:45:43 compute-0 boring_elgamal[274006]:         "type": "bluestore"
Nov 24 18:45:43 compute-0 boring_elgamal[274006]:     }
Nov 24 18:45:43 compute-0 boring_elgamal[274006]: }
Nov 24 18:45:43 compute-0 systemd[1]: libpod-c9dac9d4df657057640a0327a3ec2208c32e88eb8b6198a0f2904de2eee7a615.scope: Deactivated successfully.
Nov 24 18:45:43 compute-0 podman[273990]: 2025-11-24 18:45:43.344202974 +0000 UTC m=+1.019055432 container died c9dac9d4df657057640a0327a3ec2208c32e88eb8b6198a0f2904de2eee7a615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_elgamal, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 18:45:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ae8b84339254d873ad196c02f4e711550a5e9c9c0003ce01b6d10f27b93cda2-merged.mount: Deactivated successfully.
Nov 24 18:45:43 compute-0 podman[273990]: 2025-11-24 18:45:43.396221792 +0000 UTC m=+1.071074250 container remove c9dac9d4df657057640a0327a3ec2208c32e88eb8b6198a0f2904de2eee7a615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_elgamal, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:45:43 compute-0 systemd[1]: libpod-conmon-c9dac9d4df657057640a0327a3ec2208c32e88eb8b6198a0f2904de2eee7a615.scope: Deactivated successfully.
Nov 24 18:45:43 compute-0 sudo[273885]: pam_unix(sudo:session): session closed for user root
Nov 24 18:45:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:45:43 compute-0 sshd-session[273793]: Invalid user ftpuser from 80.94.95.115 port 44320
Nov 24 18:45:43 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:45:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:45:43 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:45:43 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 71637212-7ee6-4e5c-9c1d-3a14e86fa32b does not exist
Nov 24 18:45:43 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 455f0bbb-053b-493c-920d-3499c2d03e2a does not exist
Nov 24 18:45:43 compute-0 sudo[274052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:45:43 compute-0 sudo[274052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:45:43 compute-0 sudo[274052]: pam_unix(sudo:session): session closed for user root
Nov 24 18:45:43 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v895: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:43 compute-0 sudo[274077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:45:43 compute-0 sudo[274077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:45:43 compute-0 sudo[274077]: pam_unix(sudo:session): session closed for user root
Nov 24 18:45:43 compute-0 sshd-session[273793]: Connection closed by invalid user ftpuser 80.94.95.115 port 44320 [preauth]
Nov 24 18:45:44 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:45:44 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:45:44 compute-0 ceph-mon[74927]: pgmap v895: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:45 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v896: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:46 compute-0 ceph-mon[74927]: pgmap v896: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:47 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v897: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:45:48 compute-0 ceph-mon[74927]: pgmap v897: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:49 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v898: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:50 compute-0 ceph-mon[74927]: pgmap v898: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:51 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v899: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:52 compute-0 ceph-mon[74927]: pgmap v899: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:45:53 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v900: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:54 compute-0 ceph-mon[74927]: pgmap v900: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:55 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v901: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:56 compute-0 ceph-mon[74927]: pgmap v901: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:57 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v902: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:45:58 compute-0 ceph-mon[74927]: pgmap v902: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:45:59 compute-0 podman[274102]: 2025-11-24 18:45:59.051984367 +0000 UTC m=+0.136103205 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 18:45:59 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v903: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:00 compute-0 ceph-mon[74927]: pgmap v903: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:00 compute-0 podman[274130]: 2025-11-24 18:46:00.981023722 +0000 UTC m=+0.070684911 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 24 18:46:00 compute-0 podman[274129]: 2025-11-24 18:46:00.981417081 +0000 UTC m=+0.065749204 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:46:01 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v904: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:46:02 compute-0 ceph-mon[74927]: pgmap v904: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:03 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v905: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:46:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:46:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:46:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:46:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:46:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:46:04 compute-0 ceph-mon[74927]: pgmap v905: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:05 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v906: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:06 compute-0 ceph-mon[74927]: pgmap v906: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:07 compute-0 nova_compute[270693]: 2025-11-24 18:46:07.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:46:07 compute-0 nova_compute[270693]: 2025-11-24 18:46:07.530 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 18:46:07 compute-0 nova_compute[270693]: 2025-11-24 18:46:07.530 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 18:46:07 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v907: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:07 compute-0 nova_compute[270693]: 2025-11-24 18:46:07.550 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 18:46:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:46:08 compute-0 nova_compute[270693]: 2025-11-24 18:46:08.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:46:08 compute-0 nova_compute[270693]: 2025-11-24 18:46:08.565 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:46:08 compute-0 nova_compute[270693]: 2025-11-24 18:46:08.565 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:46:08 compute-0 nova_compute[270693]: 2025-11-24 18:46:08.565 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:46:08 compute-0 nova_compute[270693]: 2025-11-24 18:46:08.566 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 18:46:08 compute-0 nova_compute[270693]: 2025-11-24 18:46:08.566 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:46:08 compute-0 ceph-mon[74927]: pgmap v907: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:46:09 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/731849222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:46:09 compute-0 nova_compute[270693]: 2025-11-24 18:46:09.035 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:46:09 compute-0 nova_compute[270693]: 2025-11-24 18:46:09.253 270697 WARNING nova.virt.libvirt.driver [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 18:46:09 compute-0 nova_compute[270693]: 2025-11-24 18:46:09.254 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5145MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 18:46:09 compute-0 nova_compute[270693]: 2025-11-24 18:46:09.255 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:46:09 compute-0 nova_compute[270693]: 2025-11-24 18:46:09.255 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:46:09 compute-0 nova_compute[270693]: 2025-11-24 18:46:09.323 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 18:46:09 compute-0 nova_compute[270693]: 2025-11-24 18:46:09.324 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 18:46:09 compute-0 nova_compute[270693]: 2025-11-24 18:46:09.341 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:46:09 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v908: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:46:09 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1106926518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:46:09 compute-0 nova_compute[270693]: 2025-11-24 18:46:09.776 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:46:09 compute-0 nova_compute[270693]: 2025-11-24 18:46:09.781 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 18:46:09 compute-0 nova_compute[270693]: 2025-11-24 18:46:09.801 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 18:46:09 compute-0 nova_compute[270693]: 2025-11-24 18:46:09.803 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 18:46:09 compute-0 nova_compute[270693]: 2025-11-24 18:46:09.803 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:46:09 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/731849222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:46:09 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1106926518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:46:10 compute-0 nova_compute[270693]: 2025-11-24 18:46:10.804 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:46:10 compute-0 nova_compute[270693]: 2025-11-24 18:46:10.804 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:46:10 compute-0 nova_compute[270693]: 2025-11-24 18:46:10.804 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:46:10 compute-0 nova_compute[270693]: 2025-11-24 18:46:10.805 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:46:10 compute-0 nova_compute[270693]: 2025-11-24 18:46:10.805 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:46:10 compute-0 nova_compute[270693]: 2025-11-24 18:46:10.805 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 18:46:10 compute-0 ceph-mon[74927]: pgmap v908: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:11 compute-0 nova_compute[270693]: 2025-11-24 18:46:11.524 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:46:11 compute-0 nova_compute[270693]: 2025-11-24 18:46:11.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:46:11 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v909: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:46:12 compute-0 ceph-mon[74927]: pgmap v909: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:13 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v910: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:14 compute-0 ceph-mon[74927]: pgmap v910: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:15 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v911: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:16 compute-0 ceph-mon[74927]: pgmap v911: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:17 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v912: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.840633) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009977840719, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 770, "num_deletes": 257, "total_data_size": 960747, "memory_usage": 974856, "flush_reason": "Manual Compaction"}
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009977853968, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 951989, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18736, "largest_seqno": 19505, "table_properties": {"data_size": 948092, "index_size": 1677, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8356, "raw_average_key_size": 18, "raw_value_size": 940170, "raw_average_value_size": 2052, "num_data_blocks": 76, "num_entries": 458, "num_filter_entries": 458, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764009912, "oldest_key_time": 1764009912, "file_creation_time": 1764009977, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 13364 microseconds, and 4161 cpu microseconds.
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.854011) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 951989 bytes OK
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.854029) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.855681) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.855695) EVENT_LOG_v1 {"time_micros": 1764009977855691, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.855713) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 956828, prev total WAL file size 956828, number of live WAL files 2.
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.856336) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353034' seq:0, type:0; will stop at (end)
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(929KB)], [44(6083KB)]
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009977856390, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7181640, "oldest_snapshot_seqno": -1}
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4131 keys, 7043795 bytes, temperature: kUnknown
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009977902950, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7043795, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7015662, "index_size": 16695, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10373, "raw_key_size": 102387, "raw_average_key_size": 24, "raw_value_size": 6940305, "raw_average_value_size": 1680, "num_data_blocks": 702, "num_entries": 4131, "num_filter_entries": 4131, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008324, "oldest_key_time": 0, "file_creation_time": 1764009977, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.903154) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7043795 bytes
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.904630) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.1 rd, 151.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 5.9 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(14.9) write-amplify(7.4) OK, records in: 4657, records dropped: 526 output_compression: NoCompression
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.904646) EVENT_LOG_v1 {"time_micros": 1764009977904638, "job": 22, "event": "compaction_finished", "compaction_time_micros": 46616, "compaction_time_cpu_micros": 19118, "output_level": 6, "num_output_files": 1, "total_output_size": 7043795, "num_input_records": 4657, "num_output_records": 4131, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009977904876, "job": 22, "event": "table_file_deletion", "file_number": 46}
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009977905795, "job": 22, "event": "table_file_deletion", "file_number": 44}
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.856240) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.905937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.905947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.905951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.905955) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:46:17 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.905959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:46:19 compute-0 ceph-mon[74927]: pgmap v912: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 18:46:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2597587750' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:46:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 18:46:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2597587750' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:46:19 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v913: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:20 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/2597587750' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:46:20 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/2597587750' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:46:21 compute-0 ceph-mon[74927]: pgmap v913: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:21 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v914: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:46:22.740 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:46:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:46:22.740 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:46:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:46:22.741 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:46:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:46:23 compute-0 ceph-mon[74927]: pgmap v914: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:23 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v915: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:25 compute-0 ceph-mon[74927]: pgmap v915: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:25 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v916: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:27 compute-0 ceph-mon[74927]: pgmap v916: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:27 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v917: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:46:29 compute-0 ceph-mon[74927]: pgmap v917: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:29 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v918: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:30 compute-0 podman[274213]: 2025-11-24 18:46:30.029924735 +0000 UTC m=+0.114230539 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 18:46:31 compute-0 ceph-mon[74927]: pgmap v918: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:31 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v919: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:31 compute-0 podman[274239]: 2025-11-24 18:46:31.963198315 +0000 UTC m=+0.059574401 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 24 18:46:31 compute-0 podman[274240]: 2025-11-24 18:46:31.994159433 +0000 UTC m=+0.077538971 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3)
Nov 24 18:46:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:46:33 compute-0 ceph-mon[74927]: pgmap v919: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:33 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v920: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:46:34
Nov 24 18:46:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:46:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:46:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'backups', '.rgw.root', 'vms', 'default.rgw.log', '.mgr', 'default.rgw.control', 'default.rgw.meta']
Nov 24 18:46:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:46:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:46:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:46:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:46:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:46:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:46:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:46:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:46:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:46:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:46:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:46:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:46:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:46:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:46:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:46:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:46:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:46:35 compute-0 ceph-mon[74927]: pgmap v920: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:35 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v921: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:37 compute-0 ceph-mon[74927]: pgmap v921: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:37 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v922: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:46:39 compute-0 ceph-mon[74927]: pgmap v922: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:39 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v923: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:41 compute-0 ceph-mon[74927]: pgmap v923: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:41 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v924: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:46:43 compute-0 ceph-mon[74927]: pgmap v924: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:46:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:46:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:46:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:46:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:46:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:46:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:46:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:46:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:46:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:46:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:46:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:46:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:46:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:46:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:46:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:46:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:46:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:46:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:46:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:46:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:46:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:46:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:46:43 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v925: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:43 compute-0 sudo[274278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:46:43 compute-0 sudo[274278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:46:43 compute-0 sudo[274278]: pam_unix(sudo:session): session closed for user root
Nov 24 18:46:43 compute-0 sudo[274303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:46:43 compute-0 sudo[274303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:46:43 compute-0 sudo[274303]: pam_unix(sudo:session): session closed for user root
Nov 24 18:46:43 compute-0 sudo[274328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:46:43 compute-0 sudo[274328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:46:43 compute-0 sudo[274328]: pam_unix(sudo:session): session closed for user root
Nov 24 18:46:43 compute-0 sudo[274353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:46:43 compute-0 sudo[274353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:46:44 compute-0 sudo[274353]: pam_unix(sudo:session): session closed for user root
Nov 24 18:46:44 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:46:44 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:46:44 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:46:44 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:46:44 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:46:44 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:46:44 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 7f44ac36-4f13-4351-a42c-cb69328a91e6 does not exist
Nov 24 18:46:44 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev d260362f-74ad-431f-8abc-e579c7260a56 does not exist
Nov 24 18:46:44 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 8cdba7ff-6240-4a44-adc6-61ea24add6de does not exist
Nov 24 18:46:44 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:46:44 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:46:44 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:46:44 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:46:44 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:46:44 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:46:44 compute-0 sudo[274410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:46:44 compute-0 sudo[274410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:46:44 compute-0 sudo[274410]: pam_unix(sudo:session): session closed for user root
Nov 24 18:46:44 compute-0 sudo[274435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:46:44 compute-0 sudo[274435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:46:44 compute-0 sudo[274435]: pam_unix(sudo:session): session closed for user root
Nov 24 18:46:44 compute-0 sudo[274460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:46:44 compute-0 sudo[274460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:46:44 compute-0 sudo[274460]: pam_unix(sudo:session): session closed for user root
Nov 24 18:46:44 compute-0 sudo[274485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:46:44 compute-0 sudo[274485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:46:44 compute-0 podman[274549]: 2025-11-24 18:46:44.941293425 +0000 UTC m=+0.048216902 container create 4cffcd97e3e589f5813fe0fd154c29a7efccd57c01c317a38e07cecdf22fa255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:46:44 compute-0 systemd[1]: Started libpod-conmon-4cffcd97e3e589f5813fe0fd154c29a7efccd57c01c317a38e07cecdf22fa255.scope.
Nov 24 18:46:45 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:46:45 compute-0 podman[274549]: 2025-11-24 18:46:44.919696696 +0000 UTC m=+0.026620183 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:46:45 compute-0 podman[274549]: 2025-11-24 18:46:45.016365714 +0000 UTC m=+0.123289181 container init 4cffcd97e3e589f5813fe0fd154c29a7efccd57c01c317a38e07cecdf22fa255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dirac, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:46:45 compute-0 podman[274549]: 2025-11-24 18:46:45.027743173 +0000 UTC m=+0.134666660 container start 4cffcd97e3e589f5813fe0fd154c29a7efccd57c01c317a38e07cecdf22fa255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 18:46:45 compute-0 podman[274549]: 2025-11-24 18:46:45.032087789 +0000 UTC m=+0.139011236 container attach 4cffcd97e3e589f5813fe0fd154c29a7efccd57c01c317a38e07cecdf22fa255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dirac, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 18:46:45 compute-0 hungry_dirac[274565]: 167 167
Nov 24 18:46:45 compute-0 systemd[1]: libpod-4cffcd97e3e589f5813fe0fd154c29a7efccd57c01c317a38e07cecdf22fa255.scope: Deactivated successfully.
Nov 24 18:46:45 compute-0 conmon[274565]: conmon 4cffcd97e3e589f5813f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4cffcd97e3e589f5813fe0fd154c29a7efccd57c01c317a38e07cecdf22fa255.scope/container/memory.events
Nov 24 18:46:45 compute-0 podman[274570]: 2025-11-24 18:46:45.079276985 +0000 UTC m=+0.027815322 container died 4cffcd97e3e589f5813fe0fd154c29a7efccd57c01c317a38e07cecdf22fa255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dirac, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 18:46:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-869ca16263465559a0f21f82a5e996cc261ffdc1c0e74a0323dbbe8a7553b436-merged.mount: Deactivated successfully.
Nov 24 18:46:45 compute-0 podman[274570]: 2025-11-24 18:46:45.126395219 +0000 UTC m=+0.074933536 container remove 4cffcd97e3e589f5813fe0fd154c29a7efccd57c01c317a38e07cecdf22fa255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:46:45 compute-0 systemd[1]: libpod-conmon-4cffcd97e3e589f5813fe0fd154c29a7efccd57c01c317a38e07cecdf22fa255.scope: Deactivated successfully.
Nov 24 18:46:45 compute-0 ceph-mon[74927]: pgmap v925: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:45 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:46:45 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:46:45 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:46:45 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:46:45 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:46:45 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:46:45 compute-0 podman[274592]: 2025-11-24 18:46:45.335941073 +0000 UTC m=+0.052966719 container create ff6ef4af521549f08265d01f825b31a3813e18c3d1183cd0abe76e38959d5c4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lalande, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:46:45 compute-0 systemd[1]: Started libpod-conmon-ff6ef4af521549f08265d01f825b31a3813e18c3d1183cd0abe76e38959d5c4c.scope.
Nov 24 18:46:45 compute-0 podman[274592]: 2025-11-24 18:46:45.308707305 +0000 UTC m=+0.025733001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:46:45 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:46:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394577e4915ec5ba8603cf4bd1ed6a1684d68868a222ebe710116815c2b3159a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:46:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394577e4915ec5ba8603cf4bd1ed6a1684d68868a222ebe710116815c2b3159a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:46:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394577e4915ec5ba8603cf4bd1ed6a1684d68868a222ebe710116815c2b3159a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:46:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394577e4915ec5ba8603cf4bd1ed6a1684d68868a222ebe710116815c2b3159a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:46:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394577e4915ec5ba8603cf4bd1ed6a1684d68868a222ebe710116815c2b3159a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:46:45 compute-0 podman[274592]: 2025-11-24 18:46:45.427461125 +0000 UTC m=+0.144486751 container init ff6ef4af521549f08265d01f825b31a3813e18c3d1183cd0abe76e38959d5c4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lalande, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 18:46:45 compute-0 podman[274592]: 2025-11-24 18:46:45.44154849 +0000 UTC m=+0.158574096 container start ff6ef4af521549f08265d01f825b31a3813e18c3d1183cd0abe76e38959d5c4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lalande, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 24 18:46:45 compute-0 podman[274592]: 2025-11-24 18:46:45.445415715 +0000 UTC m=+0.162441361 container attach ff6ef4af521549f08265d01f825b31a3813e18c3d1183cd0abe76e38959d5c4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:46:45 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v926: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:46 compute-0 stoic_lalande[274608]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:46:46 compute-0 stoic_lalande[274608]: --> relative data size: 1.0
Nov 24 18:46:46 compute-0 stoic_lalande[274608]: --> All data devices are unavailable
Nov 24 18:46:46 compute-0 systemd[1]: libpod-ff6ef4af521549f08265d01f825b31a3813e18c3d1183cd0abe76e38959d5c4c.scope: Deactivated successfully.
Nov 24 18:46:46 compute-0 systemd[1]: libpod-ff6ef4af521549f08265d01f825b31a3813e18c3d1183cd0abe76e38959d5c4c.scope: Consumed 1.023s CPU time.
Nov 24 18:46:46 compute-0 podman[274592]: 2025-11-24 18:46:46.517371784 +0000 UTC m=+1.234397400 container died ff6ef4af521549f08265d01f825b31a3813e18c3d1183cd0abe76e38959d5c4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lalande, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:46:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-394577e4915ec5ba8603cf4bd1ed6a1684d68868a222ebe710116815c2b3159a-merged.mount: Deactivated successfully.
Nov 24 18:46:46 compute-0 podman[274592]: 2025-11-24 18:46:46.582742455 +0000 UTC m=+1.299768071 container remove ff6ef4af521549f08265d01f825b31a3813e18c3d1183cd0abe76e38959d5c4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lalande, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 24 18:46:46 compute-0 systemd[1]: libpod-conmon-ff6ef4af521549f08265d01f825b31a3813e18c3d1183cd0abe76e38959d5c4c.scope: Deactivated successfully.
Nov 24 18:46:46 compute-0 sudo[274485]: pam_unix(sudo:session): session closed for user root
Nov 24 18:46:46 compute-0 sudo[274651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:46:46 compute-0 sudo[274651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:46:46 compute-0 sudo[274651]: pam_unix(sudo:session): session closed for user root
Nov 24 18:46:46 compute-0 sudo[274676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:46:46 compute-0 sudo[274676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:46:46 compute-0 sudo[274676]: pam_unix(sudo:session): session closed for user root
Nov 24 18:46:46 compute-0 sudo[274701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:46:46 compute-0 sudo[274701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:46:46 compute-0 sudo[274701]: pam_unix(sudo:session): session closed for user root
Nov 24 18:46:46 compute-0 sudo[274726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:46:46 compute-0 sudo[274726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:46:47 compute-0 ceph-mon[74927]: pgmap v926: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:47 compute-0 podman[274790]: 2025-11-24 18:46:47.199191477 +0000 UTC m=+0.041839546 container create f54900a85880df485b37bcc7617e4e4f66dc08e7f66214549fea29ebd57caa01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shirley, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:46:47 compute-0 systemd[1]: Started libpod-conmon-f54900a85880df485b37bcc7617e4e4f66dc08e7f66214549fea29ebd57caa01.scope.
Nov 24 18:46:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:46:47 compute-0 podman[274790]: 2025-11-24 18:46:47.272711678 +0000 UTC m=+0.115359807 container init f54900a85880df485b37bcc7617e4e4f66dc08e7f66214549fea29ebd57caa01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shirley, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:46:47 compute-0 podman[274790]: 2025-11-24 18:46:47.27810557 +0000 UTC m=+0.120753679 container start f54900a85880df485b37bcc7617e4e4f66dc08e7f66214549fea29ebd57caa01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:46:47 compute-0 podman[274790]: 2025-11-24 18:46:47.185295696 +0000 UTC m=+0.027943785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:46:47 compute-0 podman[274790]: 2025-11-24 18:46:47.282122838 +0000 UTC m=+0.124771007 container attach f54900a85880df485b37bcc7617e4e4f66dc08e7f66214549fea29ebd57caa01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 18:46:47 compute-0 sleepy_shirley[274806]: 167 167
Nov 24 18:46:47 compute-0 systemd[1]: libpod-f54900a85880df485b37bcc7617e4e4f66dc08e7f66214549fea29ebd57caa01.scope: Deactivated successfully.
Nov 24 18:46:47 compute-0 podman[274790]: 2025-11-24 18:46:47.285748437 +0000 UTC m=+0.128396546 container died f54900a85880df485b37bcc7617e4e4f66dc08e7f66214549fea29ebd57caa01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:46:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ed4d6dcc735b443e0d1f8662a77c6560e3df1b7731878c36e1066eb8d9367c7-merged.mount: Deactivated successfully.
Nov 24 18:46:47 compute-0 podman[274790]: 2025-11-24 18:46:47.335215149 +0000 UTC m=+0.177863258 container remove f54900a85880df485b37bcc7617e4e4f66dc08e7f66214549fea29ebd57caa01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shirley, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 18:46:47 compute-0 systemd[1]: libpod-conmon-f54900a85880df485b37bcc7617e4e4f66dc08e7f66214549fea29ebd57caa01.scope: Deactivated successfully.
Nov 24 18:46:47 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v927: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:47 compute-0 podman[274830]: 2025-11-24 18:46:47.571137549 +0000 UTC m=+0.055675595 container create 567cf9b1fcc1669011a8cad25c73a46133197c5dd7847ee3c054d31a6eb16e79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cerf, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:46:47 compute-0 systemd[1]: Started libpod-conmon-567cf9b1fcc1669011a8cad25c73a46133197c5dd7847ee3c054d31a6eb16e79.scope.
Nov 24 18:46:47 compute-0 podman[274830]: 2025-11-24 18:46:47.545616903 +0000 UTC m=+0.030154999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:46:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:46:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93a5aecd8a923dfbe4a1e996eab4072052ba6a26c68ec39c21c2fae60a4ec8a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:46:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93a5aecd8a923dfbe4a1e996eab4072052ba6a26c68ec39c21c2fae60a4ec8a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:46:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93a5aecd8a923dfbe4a1e996eab4072052ba6a26c68ec39c21c2fae60a4ec8a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:46:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93a5aecd8a923dfbe4a1e996eab4072052ba6a26c68ec39c21c2fae60a4ec8a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:46:47 compute-0 podman[274830]: 2025-11-24 18:46:47.672806989 +0000 UTC m=+0.157345005 container init 567cf9b1fcc1669011a8cad25c73a46133197c5dd7847ee3c054d31a6eb16e79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cerf, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 18:46:47 compute-0 podman[274830]: 2025-11-24 18:46:47.679436892 +0000 UTC m=+0.163974908 container start 567cf9b1fcc1669011a8cad25c73a46133197c5dd7847ee3c054d31a6eb16e79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cerf, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Nov 24 18:46:47 compute-0 podman[274830]: 2025-11-24 18:46:47.682328262 +0000 UTC m=+0.166866268 container attach 567cf9b1fcc1669011a8cad25c73a46133197c5dd7847ee3c054d31a6eb16e79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cerf, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:46:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]: {
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:     "0": [
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:         {
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "devices": [
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "/dev/loop3"
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             ],
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "lv_name": "ceph_lv0",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "lv_size": "21470642176",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "name": "ceph_lv0",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "tags": {
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.cluster_name": "ceph",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.crush_device_class": "",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.encrypted": "0",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.osd_id": "0",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.type": "block",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.vdo": "0"
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             },
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "type": "block",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "vg_name": "ceph_vg0"
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:         }
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:     ],
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:     "1": [
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:         {
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "devices": [
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "/dev/loop4"
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             ],
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "lv_name": "ceph_lv1",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "lv_size": "21470642176",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "name": "ceph_lv1",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "tags": {
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.cluster_name": "ceph",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.crush_device_class": "",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.encrypted": "0",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.osd_id": "1",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.type": "block",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.vdo": "0"
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             },
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "type": "block",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "vg_name": "ceph_vg1"
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:         }
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:     ],
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:     "2": [
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:         {
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "devices": [
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "/dev/loop5"
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             ],
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "lv_name": "ceph_lv2",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "lv_size": "21470642176",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "name": "ceph_lv2",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "tags": {
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.cluster_name": "ceph",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.crush_device_class": "",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.encrypted": "0",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.osd_id": "2",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.type": "block",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:                 "ceph.vdo": "0"
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             },
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "type": "block",
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:             "vg_name": "ceph_vg2"
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:         }
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]:     ]
Nov 24 18:46:48 compute-0 quizzical_cerf[274847]: }
Nov 24 18:46:48 compute-0 systemd[1]: libpod-567cf9b1fcc1669011a8cad25c73a46133197c5dd7847ee3c054d31a6eb16e79.scope: Deactivated successfully.
Nov 24 18:46:48 compute-0 podman[274830]: 2025-11-24 18:46:48.360786473 +0000 UTC m=+0.845324529 container died 567cf9b1fcc1669011a8cad25c73a46133197c5dd7847ee3c054d31a6eb16e79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:46:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-93a5aecd8a923dfbe4a1e996eab4072052ba6a26c68ec39c21c2fae60a4ec8a3-merged.mount: Deactivated successfully.
Nov 24 18:46:48 compute-0 podman[274830]: 2025-11-24 18:46:48.420925666 +0000 UTC m=+0.905463682 container remove 567cf9b1fcc1669011a8cad25c73a46133197c5dd7847ee3c054d31a6eb16e79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cerf, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:46:48 compute-0 systemd[1]: libpod-conmon-567cf9b1fcc1669011a8cad25c73a46133197c5dd7847ee3c054d31a6eb16e79.scope: Deactivated successfully.
Nov 24 18:46:48 compute-0 sudo[274726]: pam_unix(sudo:session): session closed for user root
Nov 24 18:46:48 compute-0 sudo[274866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:46:48 compute-0 sudo[274866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:46:48 compute-0 sudo[274866]: pam_unix(sudo:session): session closed for user root
Nov 24 18:46:48 compute-0 sudo[274891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:46:48 compute-0 sudo[274891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:46:48 compute-0 sudo[274891]: pam_unix(sudo:session): session closed for user root
Nov 24 18:46:48 compute-0 sudo[274916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:46:48 compute-0 sudo[274916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:46:48 compute-0 sudo[274916]: pam_unix(sudo:session): session closed for user root
Nov 24 18:46:48 compute-0 sudo[274941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:46:48 compute-0 sudo[274941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:46:49 compute-0 podman[275006]: 2025-11-24 18:46:49.050642203 +0000 UTC m=+0.041488078 container create bd83a950f954b90e8dd8723408e998c6d56ae3a5e6a8c21b421030ec3cdae2fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_northcutt, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:46:49 compute-0 systemd[1]: Started libpod-conmon-bd83a950f954b90e8dd8723408e998c6d56ae3a5e6a8c21b421030ec3cdae2fe.scope.
Nov 24 18:46:49 compute-0 podman[275006]: 2025-11-24 18:46:49.034570349 +0000 UTC m=+0.025416244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:46:49 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:46:49 compute-0 podman[275006]: 2025-11-24 18:46:49.15052831 +0000 UTC m=+0.141374285 container init bd83a950f954b90e8dd8723408e998c6d56ae3a5e6a8c21b421030ec3cdae2fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:46:49 compute-0 podman[275006]: 2025-11-24 18:46:49.161416157 +0000 UTC m=+0.152262072 container start bd83a950f954b90e8dd8723408e998c6d56ae3a5e6a8c21b421030ec3cdae2fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:46:49 compute-0 podman[275006]: 2025-11-24 18:46:49.165520747 +0000 UTC m=+0.156366662 container attach bd83a950f954b90e8dd8723408e998c6d56ae3a5e6a8c21b421030ec3cdae2fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 18:46:49 compute-0 cranky_northcutt[275023]: 167 167
Nov 24 18:46:49 compute-0 systemd[1]: libpod-bd83a950f954b90e8dd8723408e998c6d56ae3a5e6a8c21b421030ec3cdae2fe.scope: Deactivated successfully.
Nov 24 18:46:49 compute-0 conmon[275023]: conmon bd83a950f954b90e8dd8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bd83a950f954b90e8dd8723408e998c6d56ae3a5e6a8c21b421030ec3cdae2fe.scope/container/memory.events
Nov 24 18:46:49 compute-0 podman[275006]: 2025-11-24 18:46:49.169855843 +0000 UTC m=+0.160701758 container died bd83a950f954b90e8dd8723408e998c6d56ae3a5e6a8c21b421030ec3cdae2fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:46:49 compute-0 ceph-mon[74927]: pgmap v927: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-433157aaf25ede8b015efd89bcef81b4ae5cc4d1ad5d1f50f446de74755bce25-merged.mount: Deactivated successfully.
Nov 24 18:46:49 compute-0 podman[275006]: 2025-11-24 18:46:49.223155589 +0000 UTC m=+0.214001504 container remove bd83a950f954b90e8dd8723408e998c6d56ae3a5e6a8c21b421030ec3cdae2fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 18:46:49 compute-0 systemd[1]: libpod-conmon-bd83a950f954b90e8dd8723408e998c6d56ae3a5e6a8c21b421030ec3cdae2fe.scope: Deactivated successfully.
Nov 24 18:46:49 compute-0 podman[275050]: 2025-11-24 18:46:49.395404078 +0000 UTC m=+0.043228000 container create d2c276ec93ddf45ac491bb801a5af11849fde3590791da5b1f37b8016ee8b600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:46:49 compute-0 systemd[1]: Started libpod-conmon-d2c276ec93ddf45ac491bb801a5af11849fde3590791da5b1f37b8016ee8b600.scope.
Nov 24 18:46:49 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:46:49 compute-0 podman[275050]: 2025-11-24 18:46:49.379463257 +0000 UTC m=+0.027287189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1182e9da104e2a353b61fbe05cf4d9359ac601737ae2a5d418ee8821fc169a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1182e9da104e2a353b61fbe05cf4d9359ac601737ae2a5d418ee8821fc169a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1182e9da104e2a353b61fbe05cf4d9359ac601737ae2a5d418ee8821fc169a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1182e9da104e2a353b61fbe05cf4d9359ac601737ae2a5d418ee8821fc169a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:46:49 compute-0 podman[275050]: 2025-11-24 18:46:49.495349446 +0000 UTC m=+0.143173398 container init d2c276ec93ddf45ac491bb801a5af11849fde3590791da5b1f37b8016ee8b600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_faraday, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 18:46:49 compute-0 podman[275050]: 2025-11-24 18:46:49.503554867 +0000 UTC m=+0.151378819 container start d2c276ec93ddf45ac491bb801a5af11849fde3590791da5b1f37b8016ee8b600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_faraday, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 18:46:49 compute-0 podman[275050]: 2025-11-24 18:46:49.507803761 +0000 UTC m=+0.155627703 container attach d2c276ec93ddf45ac491bb801a5af11849fde3590791da5b1f37b8016ee8b600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:46:49 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v928: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:50 compute-0 zealous_faraday[275066]: {
Nov 24 18:46:50 compute-0 zealous_faraday[275066]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:46:50 compute-0 zealous_faraday[275066]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:46:50 compute-0 zealous_faraday[275066]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:46:50 compute-0 zealous_faraday[275066]:         "osd_id": 0,
Nov 24 18:46:50 compute-0 zealous_faraday[275066]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:46:50 compute-0 zealous_faraday[275066]:         "type": "bluestore"
Nov 24 18:46:50 compute-0 zealous_faraday[275066]:     },
Nov 24 18:46:50 compute-0 zealous_faraday[275066]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:46:50 compute-0 zealous_faraday[275066]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:46:50 compute-0 zealous_faraday[275066]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:46:50 compute-0 zealous_faraday[275066]:         "osd_id": 1,
Nov 24 18:46:50 compute-0 zealous_faraday[275066]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:46:50 compute-0 zealous_faraday[275066]:         "type": "bluestore"
Nov 24 18:46:50 compute-0 zealous_faraday[275066]:     },
Nov 24 18:46:50 compute-0 zealous_faraday[275066]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:46:50 compute-0 zealous_faraday[275066]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:46:50 compute-0 zealous_faraday[275066]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:46:50 compute-0 zealous_faraday[275066]:         "osd_id": 2,
Nov 24 18:46:50 compute-0 zealous_faraday[275066]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:46:50 compute-0 zealous_faraday[275066]:         "type": "bluestore"
Nov 24 18:46:50 compute-0 zealous_faraday[275066]:     }
Nov 24 18:46:50 compute-0 zealous_faraday[275066]: }
Nov 24 18:46:50 compute-0 systemd[1]: libpod-d2c276ec93ddf45ac491bb801a5af11849fde3590791da5b1f37b8016ee8b600.scope: Deactivated successfully.
Nov 24 18:46:50 compute-0 systemd[1]: libpod-d2c276ec93ddf45ac491bb801a5af11849fde3590791da5b1f37b8016ee8b600.scope: Consumed 1.035s CPU time.
Nov 24 18:46:50 compute-0 podman[275050]: 2025-11-24 18:46:50.532206847 +0000 UTC m=+1.180030809 container died d2c276ec93ddf45ac491bb801a5af11849fde3590791da5b1f37b8016ee8b600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_faraday, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 24 18:46:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc1182e9da104e2a353b61fbe05cf4d9359ac601737ae2a5d418ee8821fc169a-merged.mount: Deactivated successfully.
Nov 24 18:46:50 compute-0 podman[275050]: 2025-11-24 18:46:50.604154389 +0000 UTC m=+1.251978331 container remove d2c276ec93ddf45ac491bb801a5af11849fde3590791da5b1f37b8016ee8b600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:46:50 compute-0 systemd[1]: libpod-conmon-d2c276ec93ddf45ac491bb801a5af11849fde3590791da5b1f37b8016ee8b600.scope: Deactivated successfully.
Nov 24 18:46:50 compute-0 sudo[274941]: pam_unix(sudo:session): session closed for user root
Nov 24 18:46:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:46:50 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:46:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:46:50 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:46:50 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 5bc0f3db-cdd1-4e18-9695-3b48440967db does not exist
Nov 24 18:46:50 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 3267ae93-6de4-4a5a-bae6-62003889b6bf does not exist
Nov 24 18:46:50 compute-0 sudo[275112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:46:50 compute-0 sudo[275112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:46:50 compute-0 sudo[275112]: pam_unix(sudo:session): session closed for user root
Nov 24 18:46:50 compute-0 sudo[275137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:46:50 compute-0 sudo[275137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:46:50 compute-0 sudo[275137]: pam_unix(sudo:session): session closed for user root
Nov 24 18:46:51 compute-0 ceph-mon[74927]: pgmap v928: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:51 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:46:51 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:46:51 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v929: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:46:53 compute-0 ceph-mon[74927]: pgmap v929: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:53 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v930: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:55 compute-0 ceph-mon[74927]: pgmap v930: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:55 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v931: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:57 compute-0 ceph-mon[74927]: pgmap v931: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:57 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v932: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:46:59 compute-0 ceph-mon[74927]: pgmap v932: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:46:59 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v933: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:01 compute-0 podman[275162]: 2025-11-24 18:47:01.048740595 +0000 UTC m=+0.128419387 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Nov 24 18:47:01 compute-0 ceph-mon[74927]: pgmap v933: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:01 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v934: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:47:02 compute-0 podman[275188]: 2025-11-24 18:47:02.978168952 +0000 UTC m=+0.065823324 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Nov 24 18:47:03 compute-0 podman[275187]: 2025-11-24 18:47:03.003719678 +0000 UTC m=+0.087493255 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 18:47:03 compute-0 ceph-mon[74927]: pgmap v934: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:03 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v935: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:47:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:47:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:47:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:47:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:47:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:47:05 compute-0 ceph-mon[74927]: pgmap v935: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:05 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v936: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:07 compute-0 ceph-mon[74927]: pgmap v936: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:07 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v937: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:47:08 compute-0 nova_compute[270693]: 2025-11-24 18:47:08.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:47:08 compute-0 nova_compute[270693]: 2025-11-24 18:47:08.563 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:47:08 compute-0 nova_compute[270693]: 2025-11-24 18:47:08.564 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:47:08 compute-0 nova_compute[270693]: 2025-11-24 18:47:08.564 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:47:08 compute-0 nova_compute[270693]: 2025-11-24 18:47:08.564 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 18:47:08 compute-0 nova_compute[270693]: 2025-11-24 18:47:08.564 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:47:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:47:08 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1631865295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:47:09 compute-0 nova_compute[270693]: 2025-11-24 18:47:08.999 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:47:09 compute-0 nova_compute[270693]: 2025-11-24 18:47:09.143 270697 WARNING nova.virt.libvirt.driver [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 18:47:09 compute-0 nova_compute[270693]: 2025-11-24 18:47:09.145 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5156MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 18:47:09 compute-0 nova_compute[270693]: 2025-11-24 18:47:09.145 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:47:09 compute-0 nova_compute[270693]: 2025-11-24 18:47:09.145 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:47:09 compute-0 nova_compute[270693]: 2025-11-24 18:47:09.232 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 18:47:09 compute-0 nova_compute[270693]: 2025-11-24 18:47:09.233 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 18:47:09 compute-0 nova_compute[270693]: 2025-11-24 18:47:09.256 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:47:09 compute-0 ceph-mon[74927]: pgmap v937: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:09 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1631865295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:47:09 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v938: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:47:09 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2054809592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:47:09 compute-0 nova_compute[270693]: 2025-11-24 18:47:09.653 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:47:09 compute-0 nova_compute[270693]: 2025-11-24 18:47:09.658 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 18:47:09 compute-0 nova_compute[270693]: 2025-11-24 18:47:09.681 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 18:47:09 compute-0 nova_compute[270693]: 2025-11-24 18:47:09.683 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 18:47:09 compute-0 nova_compute[270693]: 2025-11-24 18:47:09.684 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:47:10 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2054809592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:47:10 compute-0 nova_compute[270693]: 2025-11-24 18:47:10.684 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:47:10 compute-0 nova_compute[270693]: 2025-11-24 18:47:10.685 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 18:47:10 compute-0 nova_compute[270693]: 2025-11-24 18:47:10.685 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 18:47:10 compute-0 nova_compute[270693]: 2025-11-24 18:47:10.703 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 18:47:10 compute-0 nova_compute[270693]: 2025-11-24 18:47:10.704 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:47:10 compute-0 nova_compute[270693]: 2025-11-24 18:47:10.704 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:47:10 compute-0 nova_compute[270693]: 2025-11-24 18:47:10.704 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 18:47:11 compute-0 ceph-mon[74927]: pgmap v938: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:11 compute-0 nova_compute[270693]: 2025-11-24 18:47:11.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:47:11 compute-0 nova_compute[270693]: 2025-11-24 18:47:11.557 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:47:11 compute-0 nova_compute[270693]: 2025-11-24 18:47:11.557 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:47:11 compute-0 nova_compute[270693]: 2025-11-24 18:47:11.558 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:47:11 compute-0 nova_compute[270693]: 2025-11-24 18:47:11.558 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:47:11 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v939: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:12 compute-0 nova_compute[270693]: 2025-11-24 18:47:12.553 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:47:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:47:13 compute-0 ceph-mon[74927]: pgmap v939: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:13 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v940: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:15 compute-0 ceph-mon[74927]: pgmap v940: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:15 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v941: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:17 compute-0 ceph-mon[74927]: pgmap v941: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:17 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v942: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:47:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 18:47:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1443428077' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:47:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 18:47:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1443428077' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:47:19 compute-0 ceph-mon[74927]: pgmap v942: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/1443428077' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:47:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/1443428077' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:47:19 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v943: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:21 compute-0 ceph-mon[74927]: pgmap v943: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:21 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v944: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:47:22.741 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:47:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:47:22.741 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:47:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:47:22.742 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:47:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:47:23 compute-0 ceph-mon[74927]: pgmap v944: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:23 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v945: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:25 compute-0 ceph-mon[74927]: pgmap v945: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:25 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v946: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:27 compute-0 ceph-mon[74927]: pgmap v946: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:27 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v947: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:47:29 compute-0 ceph-mon[74927]: pgmap v947: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:29 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v948: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:31 compute-0 ceph-mon[74927]: pgmap v948: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:31 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v949: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:32 compute-0 podman[275270]: 2025-11-24 18:47:32.007865963 +0000 UTC m=+0.107070834 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 18:47:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:47:33 compute-0 ceph-mon[74927]: pgmap v949: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:33 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v950: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:33 compute-0 podman[275296]: 2025-11-24 18:47:33.997960064 +0000 UTC m=+0.084195693 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 18:47:34 compute-0 podman[275297]: 2025-11-24 18:47:34.002884965 +0000 UTC m=+0.083267441 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 18:47:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:47:34
Nov 24 18:47:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:47:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:47:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['.rgw.root', 'vms', '.mgr', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta']
Nov 24 18:47:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:47:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:47:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:47:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:47:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:47:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:47:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:47:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:47:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:47:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:47:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:47:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:47:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:47:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:47:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:47:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:47:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:47:35 compute-0 ceph-mon[74927]: pgmap v950: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:35 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v951: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:37 compute-0 ceph-mon[74927]: pgmap v951: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:37 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v952: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:47:39 compute-0 ceph-mon[74927]: pgmap v952: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:39 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v953: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:41 compute-0 ceph-mon[74927]: pgmap v953: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:41 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v954: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:47:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:47:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:47:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:47:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:47:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:47:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:47:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:47:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:47:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:47:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:47:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:47:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:47:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:47:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:47:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:47:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:47:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:47:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:47:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:47:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:47:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:47:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:47:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:47:43 compute-0 ceph-mon[74927]: pgmap v954: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:43 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v955: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:45 compute-0 ceph-mon[74927]: pgmap v955: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:45 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v956: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:47 compute-0 ceph-mon[74927]: pgmap v956: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:47 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v957: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:47:48 compute-0 ceph-mon[74927]: pgmap v957: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:49 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v958: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:50 compute-0 ceph-mon[74927]: pgmap v958: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:50 compute-0 sudo[275337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:47:50 compute-0 sudo[275337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:47:50 compute-0 sudo[275337]: pam_unix(sudo:session): session closed for user root
Nov 24 18:47:50 compute-0 sudo[275362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:47:50 compute-0 sudo[275362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:47:50 compute-0 sudo[275362]: pam_unix(sudo:session): session closed for user root
Nov 24 18:47:51 compute-0 sudo[275387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:47:51 compute-0 sudo[275387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:47:51 compute-0 sudo[275387]: pam_unix(sudo:session): session closed for user root
Nov 24 18:47:51 compute-0 sudo[275412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:47:51 compute-0 sudo[275412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:47:51 compute-0 sudo[275412]: pam_unix(sudo:session): session closed for user root
Nov 24 18:47:51 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v959: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:51 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:47:51 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:47:51 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:47:51 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:47:51 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:47:51 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:47:51 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 2bbf5b0e-dbe5-4aaa-a5be-0f45f3768da4 does not exist
Nov 24 18:47:51 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 39dc976b-c550-4157-8d56-cae56d63f086 does not exist
Nov 24 18:47:51 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 7ba17519-e429-452a-9ac2-2dc99223ca5d does not exist
Nov 24 18:47:51 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:47:51 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:47:51 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:47:51 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:47:51 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:47:51 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:47:51 compute-0 sudo[275468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:47:51 compute-0 sudo[275468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:47:51 compute-0 sudo[275468]: pam_unix(sudo:session): session closed for user root
Nov 24 18:47:51 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:47:51 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:47:51 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:47:51 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:47:51 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:47:51 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:47:51 compute-0 sudo[275493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:47:51 compute-0 sudo[275493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:47:51 compute-0 sudo[275493]: pam_unix(sudo:session): session closed for user root
Nov 24 18:47:51 compute-0 sudo[275518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:47:51 compute-0 sudo[275518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:47:51 compute-0 sudo[275518]: pam_unix(sudo:session): session closed for user root
Nov 24 18:47:51 compute-0 sudo[275543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:47:51 compute-0 sudo[275543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:47:52 compute-0 podman[275610]: 2025-11-24 18:47:52.175466323 +0000 UTC m=+0.055581872 container create 11d3a1bd020e83d749587432ca795023e6780fe2e426042bc556af7339ffc706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_nightingale, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:47:52 compute-0 systemd[1]: Started libpod-conmon-11d3a1bd020e83d749587432ca795023e6780fe2e426042bc556af7339ffc706.scope.
Nov 24 18:47:52 compute-0 podman[275610]: 2025-11-24 18:47:52.145699724 +0000 UTC m=+0.025815333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:47:52 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:47:52 compute-0 podman[275610]: 2025-11-24 18:47:52.263859649 +0000 UTC m=+0.143975228 container init 11d3a1bd020e83d749587432ca795023e6780fe2e426042bc556af7339ffc706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_nightingale, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:47:52 compute-0 podman[275610]: 2025-11-24 18:47:52.2724628 +0000 UTC m=+0.152578339 container start 11d3a1bd020e83d749587432ca795023e6780fe2e426042bc556af7339ffc706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_nightingale, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:47:52 compute-0 podman[275610]: 2025-11-24 18:47:52.278460887 +0000 UTC m=+0.158576436 container attach 11d3a1bd020e83d749587432ca795023e6780fe2e426042bc556af7339ffc706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 24 18:47:52 compute-0 festive_nightingale[275627]: 167 167
Nov 24 18:47:52 compute-0 systemd[1]: libpod-11d3a1bd020e83d749587432ca795023e6780fe2e426042bc556af7339ffc706.scope: Deactivated successfully.
Nov 24 18:47:52 compute-0 podman[275610]: 2025-11-24 18:47:52.280382464 +0000 UTC m=+0.160498033 container died 11d3a1bd020e83d749587432ca795023e6780fe2e426042bc556af7339ffc706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:47:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-5210a4946a693fd5d7537e4dfa2ec16e7b2bbfa30651cd4023f9421f8a08882c-merged.mount: Deactivated successfully.
Nov 24 18:47:52 compute-0 podman[275610]: 2025-11-24 18:47:52.345703904 +0000 UTC m=+0.225819453 container remove 11d3a1bd020e83d749587432ca795023e6780fe2e426042bc556af7339ffc706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_nightingale, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 18:47:52 compute-0 systemd[1]: libpod-conmon-11d3a1bd020e83d749587432ca795023e6780fe2e426042bc556af7339ffc706.scope: Deactivated successfully.
Nov 24 18:47:52 compute-0 podman[275653]: 2025-11-24 18:47:52.531399853 +0000 UTC m=+0.054735702 container create ca4a7bfda837fe6c2bcad798594c9af36744b0107137498ce4ea5609a7fc22c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lalande, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 18:47:52 compute-0 systemd[1]: Started libpod-conmon-ca4a7bfda837fe6c2bcad798594c9af36744b0107137498ce4ea5609a7fc22c7.scope.
Nov 24 18:47:52 compute-0 podman[275653]: 2025-11-24 18:47:52.505858767 +0000 UTC m=+0.029194696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:47:52 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:47:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3c5345c9b14e4d969500fa4a8904b3209718a183aacbafd18f5e5697990f48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:47:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3c5345c9b14e4d969500fa4a8904b3209718a183aacbafd18f5e5697990f48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:47:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3c5345c9b14e4d969500fa4a8904b3209718a183aacbafd18f5e5697990f48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:47:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3c5345c9b14e4d969500fa4a8904b3209718a183aacbafd18f5e5697990f48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:47:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3c5345c9b14e4d969500fa4a8904b3209718a183aacbafd18f5e5697990f48/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:47:52 compute-0 podman[275653]: 2025-11-24 18:47:52.626639086 +0000 UTC m=+0.149974925 container init ca4a7bfda837fe6c2bcad798594c9af36744b0107137498ce4ea5609a7fc22c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lalande, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:47:52 compute-0 podman[275653]: 2025-11-24 18:47:52.634526099 +0000 UTC m=+0.157861938 container start ca4a7bfda837fe6c2bcad798594c9af36744b0107137498ce4ea5609a7fc22c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lalande, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 24 18:47:52 compute-0 podman[275653]: 2025-11-24 18:47:52.64067899 +0000 UTC m=+0.164014849 container attach ca4a7bfda837fe6c2bcad798594c9af36744b0107137498ce4ea5609a7fc22c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 18:47:52 compute-0 ceph-mon[74927]: pgmap v959: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:47:53 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v960: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:53 compute-0 sharp_lalande[275670]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:47:53 compute-0 sharp_lalande[275670]: --> relative data size: 1.0
Nov 24 18:47:53 compute-0 sharp_lalande[275670]: --> All data devices are unavailable
Nov 24 18:47:53 compute-0 systemd[1]: libpod-ca4a7bfda837fe6c2bcad798594c9af36744b0107137498ce4ea5609a7fc22c7.scope: Deactivated successfully.
Nov 24 18:47:53 compute-0 podman[275699]: 2025-11-24 18:47:53.695215884 +0000 UTC m=+0.022982744 container died ca4a7bfda837fe6c2bcad798594c9af36744b0107137498ce4ea5609a7fc22c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:47:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a3c5345c9b14e4d969500fa4a8904b3209718a183aacbafd18f5e5697990f48-merged.mount: Deactivated successfully.
Nov 24 18:47:53 compute-0 podman[275699]: 2025-11-24 18:47:53.773521271 +0000 UTC m=+0.101288121 container remove ca4a7bfda837fe6c2bcad798594c9af36744b0107137498ce4ea5609a7fc22c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:47:53 compute-0 systemd[1]: libpod-conmon-ca4a7bfda837fe6c2bcad798594c9af36744b0107137498ce4ea5609a7fc22c7.scope: Deactivated successfully.
Nov 24 18:47:53 compute-0 sudo[275543]: pam_unix(sudo:session): session closed for user root
Nov 24 18:47:53 compute-0 sudo[275714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:47:53 compute-0 sudo[275714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:47:53 compute-0 sudo[275714]: pam_unix(sudo:session): session closed for user root
Nov 24 18:47:53 compute-0 sudo[275739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:47:53 compute-0 sudo[275739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:47:53 compute-0 sudo[275739]: pam_unix(sudo:session): session closed for user root
Nov 24 18:47:53 compute-0 sudo[275764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:47:53 compute-0 sudo[275764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:47:53 compute-0 sudo[275764]: pam_unix(sudo:session): session closed for user root
Nov 24 18:47:54 compute-0 sudo[275789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:47:54 compute-0 sudo[275789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:47:54 compute-0 podman[275854]: 2025-11-24 18:47:54.346781294 +0000 UTC m=+0.043296882 container create 9b96c6637a749fed46e4b873e4d6caec3138a5aa55e77b68ca2bc5c22c68b3d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:47:54 compute-0 systemd[1]: Started libpod-conmon-9b96c6637a749fed46e4b873e4d6caec3138a5aa55e77b68ca2bc5c22c68b3d7.scope.
Nov 24 18:47:54 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:47:54 compute-0 podman[275854]: 2025-11-24 18:47:54.329024609 +0000 UTC m=+0.025540197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:47:54 compute-0 podman[275854]: 2025-11-24 18:47:54.433304494 +0000 UTC m=+0.129820122 container init 9b96c6637a749fed46e4b873e4d6caec3138a5aa55e77b68ca2bc5c22c68b3d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:47:54 compute-0 podman[275854]: 2025-11-24 18:47:54.439055125 +0000 UTC m=+0.135570713 container start 9b96c6637a749fed46e4b873e4d6caec3138a5aa55e77b68ca2bc5c22c68b3d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 24 18:47:54 compute-0 charming_jackson[275870]: 167 167
Nov 24 18:47:54 compute-0 systemd[1]: libpod-9b96c6637a749fed46e4b873e4d6caec3138a5aa55e77b68ca2bc5c22c68b3d7.scope: Deactivated successfully.
Nov 24 18:47:54 compute-0 podman[275854]: 2025-11-24 18:47:54.446369134 +0000 UTC m=+0.142884752 container attach 9b96c6637a749fed46e4b873e4d6caec3138a5aa55e77b68ca2bc5c22c68b3d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 18:47:54 compute-0 podman[275854]: 2025-11-24 18:47:54.447361909 +0000 UTC m=+0.143877517 container died 9b96c6637a749fed46e4b873e4d6caec3138a5aa55e77b68ca2bc5c22c68b3d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:47:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-18a68cd7205a37b0a8795a12bd922e1db515f270cb150dbdc81df080c9a58ca8-merged.mount: Deactivated successfully.
Nov 24 18:47:54 compute-0 podman[275854]: 2025-11-24 18:47:54.493697514 +0000 UTC m=+0.190213082 container remove 9b96c6637a749fed46e4b873e4d6caec3138a5aa55e77b68ca2bc5c22c68b3d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:47:54 compute-0 systemd[1]: libpod-conmon-9b96c6637a749fed46e4b873e4d6caec3138a5aa55e77b68ca2bc5c22c68b3d7.scope: Deactivated successfully.
Nov 24 18:47:54 compute-0 podman[275894]: 2025-11-24 18:47:54.655920097 +0000 UTC m=+0.045626019 container create afb9892bded1b569c755cf8319eb9f615007928d15183885d3c544875548da67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 18:47:54 compute-0 systemd[1]: Started libpod-conmon-afb9892bded1b569c755cf8319eb9f615007928d15183885d3c544875548da67.scope.
Nov 24 18:47:54 compute-0 ceph-mon[74927]: pgmap v960: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:54 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:47:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3d989ed263637aba0c419ebf96249474ecc3453c5c254f0f02076e18ab1685d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:47:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3d989ed263637aba0c419ebf96249474ecc3453c5c254f0f02076e18ab1685d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:47:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3d989ed263637aba0c419ebf96249474ecc3453c5c254f0f02076e18ab1685d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:47:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3d989ed263637aba0c419ebf96249474ecc3453c5c254f0f02076e18ab1685d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:47:54 compute-0 podman[275894]: 2025-11-24 18:47:54.635353773 +0000 UTC m=+0.025059725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:47:54 compute-0 podman[275894]: 2025-11-24 18:47:54.741491753 +0000 UTC m=+0.131197675 container init afb9892bded1b569c755cf8319eb9f615007928d15183885d3c544875548da67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:47:54 compute-0 podman[275894]: 2025-11-24 18:47:54.748787842 +0000 UTC m=+0.138493744 container start afb9892bded1b569c755cf8319eb9f615007928d15183885d3c544875548da67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_panini, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 18:47:54 compute-0 podman[275894]: 2025-11-24 18:47:54.754517122 +0000 UTC m=+0.144223034 container attach afb9892bded1b569c755cf8319eb9f615007928d15183885d3c544875548da67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:47:55 compute-0 funny_panini[275910]: {
Nov 24 18:47:55 compute-0 funny_panini[275910]:     "0": [
Nov 24 18:47:55 compute-0 funny_panini[275910]:         {
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "devices": [
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "/dev/loop3"
Nov 24 18:47:55 compute-0 funny_panini[275910]:             ],
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "lv_name": "ceph_lv0",
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "lv_size": "21470642176",
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "name": "ceph_lv0",
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "tags": {
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.cluster_name": "ceph",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.crush_device_class": "",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.encrypted": "0",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.osd_id": "0",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.type": "block",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.vdo": "0"
Nov 24 18:47:55 compute-0 funny_panini[275910]:             },
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "type": "block",
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "vg_name": "ceph_vg0"
Nov 24 18:47:55 compute-0 funny_panini[275910]:         }
Nov 24 18:47:55 compute-0 funny_panini[275910]:     ],
Nov 24 18:47:55 compute-0 funny_panini[275910]:     "1": [
Nov 24 18:47:55 compute-0 funny_panini[275910]:         {
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "devices": [
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "/dev/loop4"
Nov 24 18:47:55 compute-0 funny_panini[275910]:             ],
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "lv_name": "ceph_lv1",
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "lv_size": "21470642176",
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "name": "ceph_lv1",
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "tags": {
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.cluster_name": "ceph",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.crush_device_class": "",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.encrypted": "0",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.osd_id": "1",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.type": "block",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.vdo": "0"
Nov 24 18:47:55 compute-0 funny_panini[275910]:             },
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "type": "block",
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "vg_name": "ceph_vg1"
Nov 24 18:47:55 compute-0 funny_panini[275910]:         }
Nov 24 18:47:55 compute-0 funny_panini[275910]:     ],
Nov 24 18:47:55 compute-0 funny_panini[275910]:     "2": [
Nov 24 18:47:55 compute-0 funny_panini[275910]:         {
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "devices": [
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "/dev/loop5"
Nov 24 18:47:55 compute-0 funny_panini[275910]:             ],
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "lv_name": "ceph_lv2",
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "lv_size": "21470642176",
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "name": "ceph_lv2",
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "tags": {
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.cluster_name": "ceph",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.crush_device_class": "",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.encrypted": "0",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.osd_id": "2",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.type": "block",
Nov 24 18:47:55 compute-0 funny_panini[275910]:                 "ceph.vdo": "0"
Nov 24 18:47:55 compute-0 funny_panini[275910]:             },
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "type": "block",
Nov 24 18:47:55 compute-0 funny_panini[275910]:             "vg_name": "ceph_vg2"
Nov 24 18:47:55 compute-0 funny_panini[275910]:         }
Nov 24 18:47:55 compute-0 funny_panini[275910]:     ]
Nov 24 18:47:55 compute-0 funny_panini[275910]: }
Nov 24 18:47:55 compute-0 systemd[1]: libpod-afb9892bded1b569c755cf8319eb9f615007928d15183885d3c544875548da67.scope: Deactivated successfully.
Nov 24 18:47:55 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v961: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:55 compute-0 podman[275919]: 2025-11-24 18:47:55.610990234 +0000 UTC m=+0.028670794 container died afb9892bded1b569c755cf8319eb9f615007928d15183885d3c544875548da67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 18:47:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3d989ed263637aba0c419ebf96249474ecc3453c5c254f0f02076e18ab1685d-merged.mount: Deactivated successfully.
Nov 24 18:47:55 compute-0 podman[275919]: 2025-11-24 18:47:55.686636547 +0000 UTC m=+0.104317017 container remove afb9892bded1b569c755cf8319eb9f615007928d15183885d3c544875548da67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_panini, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:47:55 compute-0 systemd[1]: libpod-conmon-afb9892bded1b569c755cf8319eb9f615007928d15183885d3c544875548da67.scope: Deactivated successfully.
Nov 24 18:47:55 compute-0 sudo[275789]: pam_unix(sudo:session): session closed for user root
Nov 24 18:47:55 compute-0 sudo[275934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:47:55 compute-0 sudo[275934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:47:55 compute-0 sudo[275934]: pam_unix(sudo:session): session closed for user root
Nov 24 18:47:55 compute-0 sudo[275959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:47:55 compute-0 sudo[275959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:47:55 compute-0 sudo[275959]: pam_unix(sudo:session): session closed for user root
Nov 24 18:47:55 compute-0 sudo[275984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:47:55 compute-0 sudo[275984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:47:55 compute-0 sudo[275984]: pam_unix(sudo:session): session closed for user root
Nov 24 18:47:56 compute-0 sudo[276009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:47:56 compute-0 sudo[276009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:47:56 compute-0 podman[276075]: 2025-11-24 18:47:56.436121247 +0000 UTC m=+0.039642112 container create 561f0090314ebbb4573a959b3101dbb4dc24e7839fe23afc10ea7c1112aa4f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:47:56 compute-0 systemd[1]: Started libpod-conmon-561f0090314ebbb4573a959b3101dbb4dc24e7839fe23afc10ea7c1112aa4f06.scope.
Nov 24 18:47:56 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:47:56 compute-0 podman[276075]: 2025-11-24 18:47:56.415380489 +0000 UTC m=+0.018901384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:47:56 compute-0 podman[276075]: 2025-11-24 18:47:56.517342307 +0000 UTC m=+0.120863172 container init 561f0090314ebbb4573a959b3101dbb4dc24e7839fe23afc10ea7c1112aa4f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_torvalds, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:47:56 compute-0 podman[276075]: 2025-11-24 18:47:56.525904137 +0000 UTC m=+0.129425002 container start 561f0090314ebbb4573a959b3101dbb4dc24e7839fe23afc10ea7c1112aa4f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_torvalds, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 18:47:56 compute-0 systemd[1]: libpod-561f0090314ebbb4573a959b3101dbb4dc24e7839fe23afc10ea7c1112aa4f06.scope: Deactivated successfully.
Nov 24 18:47:56 compute-0 zen_torvalds[276091]: 167 167
Nov 24 18:47:56 compute-0 conmon[276091]: conmon 561f0090314ebbb4573a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-561f0090314ebbb4573a959b3101dbb4dc24e7839fe23afc10ea7c1112aa4f06.scope/container/memory.events
Nov 24 18:47:56 compute-0 podman[276075]: 2025-11-24 18:47:56.532205911 +0000 UTC m=+0.135726796 container attach 561f0090314ebbb4573a959b3101dbb4dc24e7839fe23afc10ea7c1112aa4f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:47:56 compute-0 podman[276075]: 2025-11-24 18:47:56.534131928 +0000 UTC m=+0.137652793 container died 561f0090314ebbb4573a959b3101dbb4dc24e7839fe23afc10ea7c1112aa4f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 18:47:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1a12956c49a1b3e2c720124cddeec24412385fcf3ea95a793d2b4a93f981bad-merged.mount: Deactivated successfully.
Nov 24 18:47:56 compute-0 podman[276075]: 2025-11-24 18:47:56.581053288 +0000 UTC m=+0.184574163 container remove 561f0090314ebbb4573a959b3101dbb4dc24e7839fe23afc10ea7c1112aa4f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:47:56 compute-0 systemd[1]: libpod-conmon-561f0090314ebbb4573a959b3101dbb4dc24e7839fe23afc10ea7c1112aa4f06.scope: Deactivated successfully.
Nov 24 18:47:56 compute-0 ceph-mon[74927]: pgmap v961: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:56 compute-0 podman[276115]: 2025-11-24 18:47:56.778991046 +0000 UTC m=+0.050647151 container create 4319552c569141070f2f3d67ac699f5b1853602014ded94b8dcd5c8cdecdfd59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_villani, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 18:47:56 compute-0 systemd[1]: Started libpod-conmon-4319552c569141070f2f3d67ac699f5b1853602014ded94b8dcd5c8cdecdfd59.scope.
Nov 24 18:47:56 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:47:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc2145403a77db46ce83b4943ea24551903c29c0ee71e30671186a562d00524b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:47:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc2145403a77db46ce83b4943ea24551903c29c0ee71e30671186a562d00524b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:47:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc2145403a77db46ce83b4943ea24551903c29c0ee71e30671186a562d00524b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:47:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc2145403a77db46ce83b4943ea24551903c29c0ee71e30671186a562d00524b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:47:56 compute-0 podman[276115]: 2025-11-24 18:47:56.759153511 +0000 UTC m=+0.030809606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:47:56 compute-0 podman[276115]: 2025-11-24 18:47:56.871309838 +0000 UTC m=+0.142965923 container init 4319552c569141070f2f3d67ac699f5b1853602014ded94b8dcd5c8cdecdfd59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 18:47:56 compute-0 podman[276115]: 2025-11-24 18:47:56.87753716 +0000 UTC m=+0.149193225 container start 4319552c569141070f2f3d67ac699f5b1853602014ded94b8dcd5c8cdecdfd59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_villani, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 18:47:56 compute-0 podman[276115]: 2025-11-24 18:47:56.882852011 +0000 UTC m=+0.154508096 container attach 4319552c569141070f2f3d67ac699f5b1853602014ded94b8dcd5c8cdecdfd59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_villani, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 18:47:57 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v962: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:57 compute-0 distracted_villani[276131]: {
Nov 24 18:47:57 compute-0 distracted_villani[276131]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:47:57 compute-0 distracted_villani[276131]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:47:57 compute-0 distracted_villani[276131]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:47:57 compute-0 distracted_villani[276131]:         "osd_id": 0,
Nov 24 18:47:57 compute-0 distracted_villani[276131]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:47:57 compute-0 distracted_villani[276131]:         "type": "bluestore"
Nov 24 18:47:57 compute-0 distracted_villani[276131]:     },
Nov 24 18:47:57 compute-0 distracted_villani[276131]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:47:57 compute-0 distracted_villani[276131]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:47:57 compute-0 distracted_villani[276131]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:47:57 compute-0 distracted_villani[276131]:         "osd_id": 1,
Nov 24 18:47:57 compute-0 distracted_villani[276131]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:47:57 compute-0 distracted_villani[276131]:         "type": "bluestore"
Nov 24 18:47:57 compute-0 distracted_villani[276131]:     },
Nov 24 18:47:57 compute-0 distracted_villani[276131]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:47:57 compute-0 distracted_villani[276131]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:47:57 compute-0 distracted_villani[276131]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:47:57 compute-0 distracted_villani[276131]:         "osd_id": 2,
Nov 24 18:47:57 compute-0 distracted_villani[276131]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:47:57 compute-0 distracted_villani[276131]:         "type": "bluestore"
Nov 24 18:47:57 compute-0 distracted_villani[276131]:     }
Nov 24 18:47:57 compute-0 distracted_villani[276131]: }
Nov 24 18:47:57 compute-0 systemd[1]: libpod-4319552c569141070f2f3d67ac699f5b1853602014ded94b8dcd5c8cdecdfd59.scope: Deactivated successfully.
Nov 24 18:47:57 compute-0 podman[276115]: 2025-11-24 18:47:57.808343452 +0000 UTC m=+1.079999517 container died 4319552c569141070f2f3d67ac699f5b1853602014ded94b8dcd5c8cdecdfd59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_villani, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:47:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc2145403a77db46ce83b4943ea24551903c29c0ee71e30671186a562d00524b-merged.mount: Deactivated successfully.
Nov 24 18:47:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:47:57 compute-0 podman[276115]: 2025-11-24 18:47:57.871380076 +0000 UTC m=+1.143036161 container remove 4319552c569141070f2f3d67ac699f5b1853602014ded94b8dcd5c8cdecdfd59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_villani, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 18:47:57 compute-0 systemd[1]: libpod-conmon-4319552c569141070f2f3d67ac699f5b1853602014ded94b8dcd5c8cdecdfd59.scope: Deactivated successfully.
Nov 24 18:47:57 compute-0 sudo[276009]: pam_unix(sudo:session): session closed for user root
Nov 24 18:47:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:47:57 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:47:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:47:57 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:47:57 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 7585dbc7-da0a-4abf-804e-4b119b42dfe0 does not exist
Nov 24 18:47:57 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 9368a8b3-de26-46e3-9ceb-6567aefb39f4 does not exist
Nov 24 18:47:57 compute-0 sudo[276178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:47:57 compute-0 sudo[276178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:47:57 compute-0 sudo[276178]: pam_unix(sudo:session): session closed for user root
Nov 24 18:47:58 compute-0 sudo[276203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:47:58 compute-0 sudo[276203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:47:58 compute-0 sudo[276203]: pam_unix(sudo:session): session closed for user root
Nov 24 18:47:58 compute-0 ceph-mon[74927]: pgmap v962: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:47:58 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:47:58 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:47:59 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v963: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:00 compute-0 ceph-mon[74927]: pgmap v963: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:01 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v964: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:02 compute-0 ceph-mon[74927]: pgmap v964: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:48:03 compute-0 podman[276229]: 2025-11-24 18:48:03.029362713 +0000 UTC m=+0.117005217 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 18:48:03 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v965: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:48:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:48:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:48:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:48:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:48:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:48:04 compute-0 ceph-mon[74927]: pgmap v965: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:05 compute-0 podman[276255]: 2025-11-24 18:48:05.0188276 +0000 UTC m=+0.100816041 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Nov 24 18:48:05 compute-0 podman[276256]: 2025-11-24 18:48:05.01925204 +0000 UTC m=+0.103591689 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Nov 24 18:48:05 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v966: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:06 compute-0 ceph-mon[74927]: pgmap v966: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:07 compute-0 nova_compute[270693]: 2025-11-24 18:48:07.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:48:07 compute-0 nova_compute[270693]: 2025-11-24 18:48:07.529 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 24 18:48:07 compute-0 nova_compute[270693]: 2025-11-24 18:48:07.548 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 24 18:48:07 compute-0 nova_compute[270693]: 2025-11-24 18:48:07.548 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:48:07 compute-0 nova_compute[270693]: 2025-11-24 18:48:07.548 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 24 18:48:07 compute-0 nova_compute[270693]: 2025-11-24 18:48:07.561 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:48:07 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v967: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:48:08 compute-0 ceph-mon[74927]: pgmap v967: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:09 compute-0 nova_compute[270693]: 2025-11-24 18:48:09.572 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:48:09 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v968: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:09 compute-0 nova_compute[270693]: 2025-11-24 18:48:09.603 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:48:09 compute-0 nova_compute[270693]: 2025-11-24 18:48:09.604 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:48:09 compute-0 nova_compute[270693]: 2025-11-24 18:48:09.604 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:48:09 compute-0 nova_compute[270693]: 2025-11-24 18:48:09.604 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 18:48:09 compute-0 nova_compute[270693]: 2025-11-24 18:48:09.604 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:48:10 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:48:10 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2938684667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:48:10 compute-0 nova_compute[270693]: 2025-11-24 18:48:10.051 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:48:10 compute-0 nova_compute[270693]: 2025-11-24 18:48:10.225 270697 WARNING nova.virt.libvirt.driver [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 18:48:10 compute-0 nova_compute[270693]: 2025-11-24 18:48:10.227 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5175MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 18:48:10 compute-0 nova_compute[270693]: 2025-11-24 18:48:10.227 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:48:10 compute-0 nova_compute[270693]: 2025-11-24 18:48:10.227 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:48:10 compute-0 nova_compute[270693]: 2025-11-24 18:48:10.466 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 18:48:10 compute-0 nova_compute[270693]: 2025-11-24 18:48:10.467 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 18:48:10 compute-0 nova_compute[270693]: 2025-11-24 18:48:10.547 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:48:10 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:48:10 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3472366357' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:48:10 compute-0 ceph-mon[74927]: pgmap v968: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:10 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2938684667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:48:10 compute-0 nova_compute[270693]: 2025-11-24 18:48:10.976 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:48:10 compute-0 nova_compute[270693]: 2025-11-24 18:48:10.981 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 18:48:11 compute-0 nova_compute[270693]: 2025-11-24 18:48:11.001 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 18:48:11 compute-0 nova_compute[270693]: 2025-11-24 18:48:11.002 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 18:48:11 compute-0 nova_compute[270693]: 2025-11-24 18:48:11.002 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:48:11 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v969: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:11 compute-0 nova_compute[270693]: 2025-11-24 18:48:11.960 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:48:11 compute-0 nova_compute[270693]: 2025-11-24 18:48:11.961 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 18:48:11 compute-0 nova_compute[270693]: 2025-11-24 18:48:11.961 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 18:48:11 compute-0 nova_compute[270693]: 2025-11-24 18:48:11.977 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 18:48:11 compute-0 nova_compute[270693]: 2025-11-24 18:48:11.978 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:48:11 compute-0 nova_compute[270693]: 2025-11-24 18:48:11.978 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:48:11 compute-0 nova_compute[270693]: 2025-11-24 18:48:11.978 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 18:48:12 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3472366357' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:48:12 compute-0 nova_compute[270693]: 2025-11-24 18:48:12.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:48:12 compute-0 nova_compute[270693]: 2025-11-24 18:48:12.530 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:48:12 compute-0 nova_compute[270693]: 2025-11-24 18:48:12.530 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:48:12 compute-0 nova_compute[270693]: 2025-11-24 18:48:12.530 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:48:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:48:13 compute-0 ceph-mon[74927]: pgmap v969: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:13 compute-0 nova_compute[270693]: 2025-11-24 18:48:13.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:48:13 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v970: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:14 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Nov 24 18:48:14 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Nov 24 18:48:14 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Nov 24 18:48:15 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Nov 24 18:48:15 compute-0 ceph-mon[74927]: pgmap v970: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:15 compute-0 ceph-mon[74927]: osdmap e121: 3 total, 3 up, 3 in
Nov 24 18:48:15 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Nov 24 18:48:15 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Nov 24 18:48:15 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v973: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 127 B/s wr, 0 op/s
Nov 24 18:48:16 compute-0 ceph-mon[74927]: osdmap e122: 3 total, 3 up, 3 in
Nov 24 18:48:17 compute-0 ceph-mon[74927]: pgmap v973: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 127 B/s wr, 0 op/s
Nov 24 18:48:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Nov 24 18:48:17 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v974: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 127 B/s wr, 0 op/s
Nov 24 18:48:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Nov 24 18:48:17 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Nov 24 18:48:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:48:18 compute-0 ceph-mon[74927]: pgmap v974: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 127 B/s wr, 0 op/s
Nov 24 18:48:18 compute-0 ceph-mon[74927]: osdmap e123: 3 total, 3 up, 3 in
Nov 24 18:48:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 18:48:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/822022556' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:48:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 18:48:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/822022556' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:48:19 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v976: 321 pgs: 321 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 3.4 MiB/s wr, 9 op/s
Nov 24 18:48:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Nov 24 18:48:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Nov 24 18:48:19 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Nov 24 18:48:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/822022556' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:48:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/822022556' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:48:20 compute-0 ceph-mon[74927]: pgmap v976: 321 pgs: 321 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 3.4 MiB/s wr, 9 op/s
Nov 24 18:48:20 compute-0 ceph-mon[74927]: osdmap e124: 3 total, 3 up, 3 in
Nov 24 18:48:21 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v978: 321 pgs: 321 active+clean; 37 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 5.9 MiB/s wr, 31 op/s
Nov 24 18:48:22 compute-0 ceph-mon[74927]: pgmap v978: 321 pgs: 321 active+clean; 37 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 5.9 MiB/s wr, 31 op/s
Nov 24 18:48:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:48:22.742 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:48:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:48:22.743 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:48:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:48:22.743 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:48:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:48:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Nov 24 18:48:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Nov 24 18:48:22 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Nov 24 18:48:23 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v980: 321 pgs: 321 active+clean; 37 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 6.1 MiB/s wr, 62 op/s
Nov 24 18:48:23 compute-0 ceph-mon[74927]: osdmap e125: 3 total, 3 up, 3 in
Nov 24 18:48:24 compute-0 ceph-mon[74927]: pgmap v980: 321 pgs: 321 active+clean; 37 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 6.1 MiB/s wr, 62 op/s
Nov 24 18:48:25 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v981: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.2 MiB/s wr, 48 op/s
Nov 24 18:48:26 compute-0 ceph-mon[74927]: pgmap v981: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.2 MiB/s wr, 48 op/s
Nov 24 18:48:27 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v982: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 2.6 MiB/s wr, 40 op/s
Nov 24 18:48:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:48:28 compute-0 ceph-mon[74927]: pgmap v982: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 2.6 MiB/s wr, 40 op/s
Nov 24 18:48:29 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v983: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Nov 24 18:48:30 compute-0 ceph-mon[74927]: pgmap v983: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Nov 24 18:48:31 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v984: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 456 KiB/s wr, 18 op/s
Nov 24 18:48:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:48:32 compute-0 ceph-mon[74927]: pgmap v984: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 456 KiB/s wr, 18 op/s
Nov 24 18:48:33 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v985: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 425 KiB/s wr, 17 op/s
Nov 24 18:48:34 compute-0 podman[276338]: 2025-11-24 18:48:34.028975691 +0000 UTC m=+0.113656805 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 24 18:48:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:48:34
Nov 24 18:48:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:48:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:48:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'default.rgw.control', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'vms', 'backups']
Nov 24 18:48:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:48:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:48:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:48:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:48:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:48:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:48:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:48:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:48:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:48:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:48:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:48:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:48:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:48:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:48:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:48:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:48:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:48:35 compute-0 ceph-mon[74927]: pgmap v985: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 425 KiB/s wr, 17 op/s
Nov 24 18:48:35 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v986: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 379 KiB/s wr, 0 op/s
Nov 24 18:48:35 compute-0 podman[276366]: 2025-11-24 18:48:35.957076755 +0000 UTC m=+0.048045698 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 18:48:36 compute-0 podman[276365]: 2025-11-24 18:48:36.003833311 +0000 UTC m=+0.088366726 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 24 18:48:37 compute-0 ceph-mon[74927]: pgmap v986: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 379 KiB/s wr, 0 op/s
Nov 24 18:48:37 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:48:37.594 179763 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:2b:64', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'fa:26:5b:32:fa:ba'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 18:48:37 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:48:37.595 179763 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 18:48:37 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v987: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:48:39 compute-0 ceph-mon[74927]: pgmap v987: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:39 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v988: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:41 compute-0 ceph-mon[74927]: pgmap v988: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:41 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v989: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:48:43 compute-0 ceph-mon[74927]: pgmap v989: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:48:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:48:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:48:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:48:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:48:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:48:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:48:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:48:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:48:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:48:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 18:48:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:48:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:48:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:48:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:48:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:48:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:48:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:48:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:48:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:48:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:48:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:48:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:48:43 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:48:43.596 179763 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=302e9f34-0427-4ff9-a29b-2fc7b5250666, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 18:48:43 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v990: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.093322) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010124093423, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1453, "num_deletes": 251, "total_data_size": 2265062, "memory_usage": 2311480, "flush_reason": "Manual Compaction"}
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010124103562, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2232086, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19506, "largest_seqno": 20958, "table_properties": {"data_size": 2225267, "index_size": 3954, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14135, "raw_average_key_size": 19, "raw_value_size": 2211508, "raw_average_value_size": 3114, "num_data_blocks": 180, "num_entries": 710, "num_filter_entries": 710, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764009977, "oldest_key_time": 1764009977, "file_creation_time": 1764010124, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 10263 microseconds, and 4780 cpu microseconds.
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.103597) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2232086 bytes OK
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.103612) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.105113) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.105127) EVENT_LOG_v1 {"time_micros": 1764010124105123, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.105144) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2258660, prev total WAL file size 2258660, number of live WAL files 2.
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.105887) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2179KB)], [47(6878KB)]
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010124105935, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9275881, "oldest_snapshot_seqno": -1}
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4323 keys, 7503754 bytes, temperature: kUnknown
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010124138321, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7503754, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7473937, "index_size": 17931, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10821, "raw_key_size": 106987, "raw_average_key_size": 24, "raw_value_size": 7394663, "raw_average_value_size": 1710, "num_data_blocks": 752, "num_entries": 4323, "num_filter_entries": 4323, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008324, "oldest_key_time": 0, "file_creation_time": 1764010124, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.138540) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7503754 bytes
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.140013) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 285.7 rd, 231.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 6.7 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(7.5) write-amplify(3.4) OK, records in: 4841, records dropped: 518 output_compression: NoCompression
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.140056) EVENT_LOG_v1 {"time_micros": 1764010124140039, "job": 24, "event": "compaction_finished", "compaction_time_micros": 32467, "compaction_time_cpu_micros": 15064, "output_level": 6, "num_output_files": 1, "total_output_size": 7503754, "num_input_records": 4841, "num_output_records": 4323, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010124140597, "job": 24, "event": "table_file_deletion", "file_number": 49}
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010124141854, "job": 24, "event": "table_file_deletion", "file_number": 47}
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.105801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.141942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.141948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.141950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.141952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:48:44 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.141955) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:48:45 compute-0 ceph-mon[74927]: pgmap v990: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:45 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v991: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:47 compute-0 ceph-mon[74927]: pgmap v991: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:47 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:48:47 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 4636 writes, 20K keys, 4636 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 4636 writes, 4636 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1317 writes, 5973 keys, 1317 commit groups, 1.0 writes per commit group, ingest: 8.63 MB, 0.01 MB/s
                                           Interval WAL: 1317 writes, 1317 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     87.5      0.28              0.07        12    0.023       0      0       0.0       0.0
                                             L6      1/0    7.16 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    203.6    166.8      0.46              0.21        11    0.042     48K   5780       0.0       0.0
                                            Sum      1/0    7.16 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2    127.3    137.0      0.73              0.28        23    0.032     48K   5780       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.0    149.5    151.4      0.30              0.13        10    0.030     23K   2583       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    203.6    166.8      0.46              0.21        11    0.042     48K   5780       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     87.9      0.27              0.07        11    0.025       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     28.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.024, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.09 GB read, 0.05 MB/s read, 0.7 seconds
                                           Interval compaction: 0.04 GB write, 0.08 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562af0cfd1f0#2 capacity: 308.00 MB usage: 8.61 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000109 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(559,8.22 MB,2.66877%) FilterBlock(24,142.36 KB,0.0451373%) IndexBlock(24,261.91 KB,0.0830415%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 24 18:48:47 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v992: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:48:49 compute-0 ceph-mon[74927]: pgmap v992: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:49 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v993: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:51 compute-0 ceph-mon[74927]: pgmap v993: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:51 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v994: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Nov 24 18:48:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Nov 24 18:48:52 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Nov 24 18:48:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:48:53 compute-0 ceph-mon[74927]: pgmap v994: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:48:53 compute-0 ceph-mon[74927]: osdmap e126: 3 total, 3 up, 3 in
Nov 24 18:48:53 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v996: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 818 B/s wr, 3 op/s
Nov 24 18:48:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Nov 24 18:48:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Nov 24 18:48:54 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Nov 24 18:48:55 compute-0 ceph-mon[74927]: pgmap v996: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 818 B/s wr, 3 op/s
Nov 24 18:48:55 compute-0 ceph-mon[74927]: osdmap e127: 3 total, 3 up, 3 in
Nov 24 18:48:55 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v998: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 3.0 KiB/s wr, 43 op/s
Nov 24 18:48:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Nov 24 18:48:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Nov 24 18:48:57 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Nov 24 18:48:57 compute-0 ceph-mon[74927]: pgmap v998: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 3.0 KiB/s wr, 43 op/s
Nov 24 18:48:57 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1000: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 4.0 KiB/s wr, 57 op/s
Nov 24 18:48:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:48:58 compute-0 sudo[276406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:48:58 compute-0 sudo[276406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:48:58 compute-0 sudo[276406]: pam_unix(sudo:session): session closed for user root
Nov 24 18:48:58 compute-0 ceph-mon[74927]: osdmap e128: 3 total, 3 up, 3 in
Nov 24 18:48:58 compute-0 sudo[276431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:48:58 compute-0 sudo[276431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:48:58 compute-0 sudo[276431]: pam_unix(sudo:session): session closed for user root
Nov 24 18:48:58 compute-0 sudo[276456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:48:58 compute-0 sudo[276456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:48:58 compute-0 sudo[276456]: pam_unix(sudo:session): session closed for user root
Nov 24 18:48:58 compute-0 sudo[276481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:48:58 compute-0 sudo[276481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:48:58 compute-0 sudo[276481]: pam_unix(sudo:session): session closed for user root
Nov 24 18:48:58 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:48:58 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:48:58 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:48:58 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:48:58 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:48:58 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:48:58 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 67fa1872-d787-46e5-9f24-0508d6ab481a does not exist
Nov 24 18:48:58 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev fda44082-1974-42bb-b95b-448633b57f7b does not exist
Nov 24 18:48:58 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 7a6b1194-d726-4849-b24d-27274a80cbd0 does not exist
Nov 24 18:48:58 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:48:58 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:48:58 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:48:58 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:48:58 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:48:58 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:48:59 compute-0 sudo[276537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:48:59 compute-0 sudo[276537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:48:59 compute-0 sudo[276537]: pam_unix(sudo:session): session closed for user root
Nov 24 18:48:59 compute-0 sudo[276562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:48:59 compute-0 sudo[276562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:48:59 compute-0 sudo[276562]: pam_unix(sudo:session): session closed for user root
Nov 24 18:48:59 compute-0 sudo[276587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:48:59 compute-0 sudo[276587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:48:59 compute-0 sudo[276587]: pam_unix(sudo:session): session closed for user root
Nov 24 18:48:59 compute-0 ceph-mon[74927]: pgmap v1000: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 4.0 KiB/s wr, 57 op/s
Nov 24 18:48:59 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:48:59 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:48:59 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:48:59 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:48:59 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:48:59 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:48:59 compute-0 sudo[276612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:48:59 compute-0 sudo[276612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:48:59 compute-0 podman[276676]: 2025-11-24 18:48:59.580498696 +0000 UTC m=+0.042431710 container create 96c5bf8411d32e72e56e1ba1676dcb7800e9fe95885d41ba0afcef641e3d7c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bassi, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 18:48:59 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1001: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 4.5 KiB/s wr, 54 op/s
Nov 24 18:48:59 compute-0 systemd[1]: Started libpod-conmon-96c5bf8411d32e72e56e1ba1676dcb7800e9fe95885d41ba0afcef641e3d7c38.scope.
Nov 24 18:48:59 compute-0 podman[276676]: 2025-11-24 18:48:59.563213713 +0000 UTC m=+0.025146707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:48:59 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:48:59 compute-0 podman[276676]: 2025-11-24 18:48:59.676268633 +0000 UTC m=+0.138201617 container init 96c5bf8411d32e72e56e1ba1676dcb7800e9fe95885d41ba0afcef641e3d7c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bassi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:48:59 compute-0 podman[276676]: 2025-11-24 18:48:59.683389887 +0000 UTC m=+0.145322871 container start 96c5bf8411d32e72e56e1ba1676dcb7800e9fe95885d41ba0afcef641e3d7c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bassi, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:48:59 compute-0 podman[276676]: 2025-11-24 18:48:59.686399361 +0000 UTC m=+0.148332355 container attach 96c5bf8411d32e72e56e1ba1676dcb7800e9fe95885d41ba0afcef641e3d7c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 18:48:59 compute-0 reverent_bassi[276692]: 167 167
Nov 24 18:48:59 compute-0 systemd[1]: libpod-96c5bf8411d32e72e56e1ba1676dcb7800e9fe95885d41ba0afcef641e3d7c38.scope: Deactivated successfully.
Nov 24 18:48:59 compute-0 conmon[276692]: conmon 96c5bf8411d32e72e56e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-96c5bf8411d32e72e56e1ba1676dcb7800e9fe95885d41ba0afcef641e3d7c38.scope/container/memory.events
Nov 24 18:48:59 compute-0 podman[276676]: 2025-11-24 18:48:59.693366212 +0000 UTC m=+0.155299226 container died 96c5bf8411d32e72e56e1ba1676dcb7800e9fe95885d41ba0afcef641e3d7c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:48:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-8779eaffac1f44e69a0399fc7c11bcb3ec844e443d322f526a6810a7bb0ae1f4-merged.mount: Deactivated successfully.
Nov 24 18:48:59 compute-0 podman[276676]: 2025-11-24 18:48:59.745288133 +0000 UTC m=+0.207221107 container remove 96c5bf8411d32e72e56e1ba1676dcb7800e9fe95885d41ba0afcef641e3d7c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bassi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 18:48:59 compute-0 systemd[1]: libpod-conmon-96c5bf8411d32e72e56e1ba1676dcb7800e9fe95885d41ba0afcef641e3d7c38.scope: Deactivated successfully.
Nov 24 18:49:00 compute-0 podman[276716]: 2025-11-24 18:49:00.008344898 +0000 UTC m=+0.057296155 container create 3f804521be453525acfd19eb8182916e8020068a4a1c099f8571d4babbb1b235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_moser, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:49:00 compute-0 systemd[1]: Started libpod-conmon-3f804521be453525acfd19eb8182916e8020068a4a1c099f8571d4babbb1b235.scope.
Nov 24 18:49:00 compute-0 podman[276716]: 2025-11-24 18:48:59.973271748 +0000 UTC m=+0.022223035 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:49:00 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b96162fea127591f44a7e1eebf12ef7eb3f97e5518e77c1b7e4a8fb5593bc7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b96162fea127591f44a7e1eebf12ef7eb3f97e5518e77c1b7e4a8fb5593bc7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b96162fea127591f44a7e1eebf12ef7eb3f97e5518e77c1b7e4a8fb5593bc7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b96162fea127591f44a7e1eebf12ef7eb3f97e5518e77c1b7e4a8fb5593bc7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b96162fea127591f44a7e1eebf12ef7eb3f97e5518e77c1b7e4a8fb5593bc7f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:49:00 compute-0 podman[276716]: 2025-11-24 18:49:00.099585413 +0000 UTC m=+0.148536700 container init 3f804521be453525acfd19eb8182916e8020068a4a1c099f8571d4babbb1b235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_moser, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:49:00 compute-0 podman[276716]: 2025-11-24 18:49:00.110959161 +0000 UTC m=+0.159910448 container start 3f804521be453525acfd19eb8182916e8020068a4a1c099f8571d4babbb1b235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_moser, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 24 18:49:00 compute-0 podman[276716]: 2025-11-24 18:49:00.115491332 +0000 UTC m=+0.164442689 container attach 3f804521be453525acfd19eb8182916e8020068a4a1c099f8571d4babbb1b235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_moser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 24 18:49:01 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Nov 24 18:49:01 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Nov 24 18:49:01 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Nov 24 18:49:01 compute-0 ceph-mon[74927]: pgmap v1001: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 4.5 KiB/s wr, 54 op/s
Nov 24 18:49:01 compute-0 hungry_moser[276733]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:49:01 compute-0 hungry_moser[276733]: --> relative data size: 1.0
Nov 24 18:49:01 compute-0 hungry_moser[276733]: --> All data devices are unavailable
Nov 24 18:49:01 compute-0 systemd[1]: libpod-3f804521be453525acfd19eb8182916e8020068a4a1c099f8571d4babbb1b235.scope: Deactivated successfully.
Nov 24 18:49:01 compute-0 systemd[1]: libpod-3f804521be453525acfd19eb8182916e8020068a4a1c099f8571d4babbb1b235.scope: Consumed 1.104s CPU time.
Nov 24 18:49:01 compute-0 podman[276762]: 2025-11-24 18:49:01.319828156 +0000 UTC m=+0.033061301 container died 3f804521be453525acfd19eb8182916e8020068a4a1c099f8571d4babbb1b235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:49:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b96162fea127591f44a7e1eebf12ef7eb3f97e5518e77c1b7e4a8fb5593bc7f-merged.mount: Deactivated successfully.
Nov 24 18:49:01 compute-0 podman[276762]: 2025-11-24 18:49:01.380085072 +0000 UTC m=+0.093318227 container remove 3f804521be453525acfd19eb8182916e8020068a4a1c099f8571d4babbb1b235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_moser, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 18:49:01 compute-0 systemd[1]: libpod-conmon-3f804521be453525acfd19eb8182916e8020068a4a1c099f8571d4babbb1b235.scope: Deactivated successfully.
Nov 24 18:49:01 compute-0 sudo[276612]: pam_unix(sudo:session): session closed for user root
Nov 24 18:49:01 compute-0 sudo[276777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:49:01 compute-0 sudo[276777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:49:01 compute-0 sudo[276777]: pam_unix(sudo:session): session closed for user root
Nov 24 18:49:01 compute-0 sudo[276802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:49:01 compute-0 sudo[276802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:49:01 compute-0 sudo[276802]: pam_unix(sudo:session): session closed for user root
Nov 24 18:49:01 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1003: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 5.4 KiB/s wr, 69 op/s
Nov 24 18:49:01 compute-0 sudo[276827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:49:01 compute-0 sudo[276827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:49:01 compute-0 sudo[276827]: pam_unix(sudo:session): session closed for user root
Nov 24 18:49:01 compute-0 anacron[30860]: Job `cron.weekly' started
Nov 24 18:49:01 compute-0 anacron[30860]: Job `cron.weekly' terminated
Nov 24 18:49:01 compute-0 sudo[276852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:49:01 compute-0 sudo[276852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:49:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Nov 24 18:49:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Nov 24 18:49:02 compute-0 ceph-mon[74927]: osdmap e129: 3 total, 3 up, 3 in
Nov 24 18:49:02 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Nov 24 18:49:02 compute-0 podman[276921]: 2025-11-24 18:49:02.239757861 +0000 UTC m=+0.069211257 container create 889077d08510933ca34576b5b0c5675688826824d974f979b87aa6cdda54dfc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lamarr, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:49:02 compute-0 systemd[1]: Started libpod-conmon-889077d08510933ca34576b5b0c5675688826824d974f979b87aa6cdda54dfc3.scope.
Nov 24 18:49:02 compute-0 podman[276921]: 2025-11-24 18:49:02.211395726 +0000 UTC m=+0.040849112 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:49:02 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:49:02 compute-0 podman[276921]: 2025-11-24 18:49:02.342102608 +0000 UTC m=+0.171556014 container init 889077d08510933ca34576b5b0c5675688826824d974f979b87aa6cdda54dfc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 18:49:02 compute-0 podman[276921]: 2025-11-24 18:49:02.353295492 +0000 UTC m=+0.182748868 container start 889077d08510933ca34576b5b0c5675688826824d974f979b87aa6cdda54dfc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 18:49:02 compute-0 podman[276921]: 2025-11-24 18:49:02.356443629 +0000 UTC m=+0.185897015 container attach 889077d08510933ca34576b5b0c5675688826824d974f979b87aa6cdda54dfc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 18:49:02 compute-0 beautiful_lamarr[276937]: 167 167
Nov 24 18:49:02 compute-0 systemd[1]: libpod-889077d08510933ca34576b5b0c5675688826824d974f979b87aa6cdda54dfc3.scope: Deactivated successfully.
Nov 24 18:49:02 compute-0 podman[276921]: 2025-11-24 18:49:02.363923443 +0000 UTC m=+0.193376839 container died 889077d08510933ca34576b5b0c5675688826824d974f979b87aa6cdda54dfc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lamarr, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 18:49:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-413e631c339f8a43afa88cf0f4f2518b8d1792fa8c3487b84514b235516e88f0-merged.mount: Deactivated successfully.
Nov 24 18:49:02 compute-0 podman[276921]: 2025-11-24 18:49:02.404167228 +0000 UTC m=+0.233620584 container remove 889077d08510933ca34576b5b0c5675688826824d974f979b87aa6cdda54dfc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lamarr, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 24 18:49:02 compute-0 systemd[1]: libpod-conmon-889077d08510933ca34576b5b0c5675688826824d974f979b87aa6cdda54dfc3.scope: Deactivated successfully.
Nov 24 18:49:02 compute-0 podman[276962]: 2025-11-24 18:49:02.61526858 +0000 UTC m=+0.042676787 container create dedeb5966677fb9f2b8cc6ea03353cabb960259abac86b3a9e8a9cd0db567b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_galileo, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:49:02 compute-0 systemd[1]: Started libpod-conmon-dedeb5966677fb9f2b8cc6ea03353cabb960259abac86b3a9e8a9cd0db567b9e.scope.
Nov 24 18:49:02 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:49:02 compute-0 podman[276962]: 2025-11-24 18:49:02.597678769 +0000 UTC m=+0.025086986 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:49:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdddecb749b6298db63963f3e71c9a1fe6646f0b8546bf9b81b1d983f90d3f28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:49:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdddecb749b6298db63963f3e71c9a1fe6646f0b8546bf9b81b1d983f90d3f28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:49:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdddecb749b6298db63963f3e71c9a1fe6646f0b8546bf9b81b1d983f90d3f28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:49:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdddecb749b6298db63963f3e71c9a1fe6646f0b8546bf9b81b1d983f90d3f28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:49:02 compute-0 podman[276962]: 2025-11-24 18:49:02.708695439 +0000 UTC m=+0.136103696 container init dedeb5966677fb9f2b8cc6ea03353cabb960259abac86b3a9e8a9cd0db567b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_galileo, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 24 18:49:02 compute-0 podman[276962]: 2025-11-24 18:49:02.722281341 +0000 UTC m=+0.149689548 container start dedeb5966677fb9f2b8cc6ea03353cabb960259abac86b3a9e8a9cd0db567b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_galileo, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:49:02 compute-0 podman[276962]: 2025-11-24 18:49:02.725858469 +0000 UTC m=+0.153266676 container attach dedeb5966677fb9f2b8cc6ea03353cabb960259abac86b3a9e8a9cd0db567b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:49:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:49:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Nov 24 18:49:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Nov 24 18:49:02 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Nov 24 18:49:03 compute-0 ceph-mon[74927]: pgmap v1003: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 5.4 KiB/s wr, 69 op/s
Nov 24 18:49:03 compute-0 ceph-mon[74927]: osdmap e130: 3 total, 3 up, 3 in
Nov 24 18:49:03 compute-0 ceph-mon[74927]: osdmap e131: 3 total, 3 up, 3 in
Nov 24 18:49:03 compute-0 fervent_galileo[276979]: {
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:     "0": [
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:         {
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "devices": [
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "/dev/loop3"
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             ],
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "lv_name": "ceph_lv0",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "lv_size": "21470642176",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "name": "ceph_lv0",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "tags": {
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.cluster_name": "ceph",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.crush_device_class": "",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.encrypted": "0",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.osd_id": "0",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.type": "block",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.vdo": "0"
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             },
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "type": "block",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "vg_name": "ceph_vg0"
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:         }
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:     ],
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:     "1": [
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:         {
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "devices": [
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "/dev/loop4"
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             ],
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "lv_name": "ceph_lv1",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "lv_size": "21470642176",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "name": "ceph_lv1",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "tags": {
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.cluster_name": "ceph",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.crush_device_class": "",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.encrypted": "0",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.osd_id": "1",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.type": "block",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.vdo": "0"
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             },
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "type": "block",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "vg_name": "ceph_vg1"
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:         }
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:     ],
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:     "2": [
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:         {
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "devices": [
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "/dev/loop5"
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             ],
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "lv_name": "ceph_lv2",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "lv_size": "21470642176",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "name": "ceph_lv2",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "tags": {
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.cluster_name": "ceph",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.crush_device_class": "",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.encrypted": "0",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.osd_id": "2",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.type": "block",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:                 "ceph.vdo": "0"
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             },
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "type": "block",
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:             "vg_name": "ceph_vg2"
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:         }
Nov 24 18:49:03 compute-0 fervent_galileo[276979]:     ]
Nov 24 18:49:03 compute-0 fervent_galileo[276979]: }
Nov 24 18:49:03 compute-0 systemd[1]: libpod-dedeb5966677fb9f2b8cc6ea03353cabb960259abac86b3a9e8a9cd0db567b9e.scope: Deactivated successfully.
Nov 24 18:49:03 compute-0 podman[276988]: 2025-11-24 18:49:03.553291219 +0000 UTC m=+0.026827968 container died dedeb5966677fb9f2b8cc6ea03353cabb960259abac86b3a9e8a9cd0db567b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_galileo, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 18:49:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdddecb749b6298db63963f3e71c9a1fe6646f0b8546bf9b81b1d983f90d3f28-merged.mount: Deactivated successfully.
Nov 24 18:49:03 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1006: 321 pgs: 321 active+clean; 89 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 8.0 MiB/s wr, 171 op/s
Nov 24 18:49:03 compute-0 podman[276988]: 2025-11-24 18:49:03.613257958 +0000 UTC m=+0.086794637 container remove dedeb5966677fb9f2b8cc6ea03353cabb960259abac86b3a9e8a9cd0db567b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 18:49:03 compute-0 systemd[1]: libpod-conmon-dedeb5966677fb9f2b8cc6ea03353cabb960259abac86b3a9e8a9cd0db567b9e.scope: Deactivated successfully.
Nov 24 18:49:03 compute-0 sudo[276852]: pam_unix(sudo:session): session closed for user root
Nov 24 18:49:03 compute-0 sudo[277002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:49:03 compute-0 sudo[277002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:49:03 compute-0 sudo[277002]: pam_unix(sudo:session): session closed for user root
Nov 24 18:49:03 compute-0 sudo[277027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:49:03 compute-0 sudo[277027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:49:03 compute-0 sudo[277027]: pam_unix(sudo:session): session closed for user root
Nov 24 18:49:03 compute-0 sudo[277052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:49:03 compute-0 sudo[277052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:49:03 compute-0 sudo[277052]: pam_unix(sudo:session): session closed for user root
Nov 24 18:49:03 compute-0 sudo[277077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:49:03 compute-0 sudo[277077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:49:04 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Nov 24 18:49:04 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Nov 24 18:49:04 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Nov 24 18:49:04 compute-0 podman[277143]: 2025-11-24 18:49:04.291396851 +0000 UTC m=+0.040552674 container create a7ad43334d7e8713b5ac6ca9bad942530ac735cd97b9f68f72bba8cf20471501 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:49:04 compute-0 systemd[1]: Started libpod-conmon-a7ad43334d7e8713b5ac6ca9bad942530ac735cd97b9f68f72bba8cf20471501.scope.
Nov 24 18:49:04 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:49:04 compute-0 podman[277143]: 2025-11-24 18:49:04.365577738 +0000 UTC m=+0.114733631 container init a7ad43334d7e8713b5ac6ca9bad942530ac735cd97b9f68f72bba8cf20471501 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:49:04 compute-0 podman[277143]: 2025-11-24 18:49:04.272665102 +0000 UTC m=+0.021820945 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:49:04 compute-0 podman[277143]: 2025-11-24 18:49:04.372747324 +0000 UTC m=+0.121903137 container start a7ad43334d7e8713b5ac6ca9bad942530ac735cd97b9f68f72bba8cf20471501 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_spence, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 18:49:04 compute-0 podman[277143]: 2025-11-24 18:49:04.375887601 +0000 UTC m=+0.125043444 container attach a7ad43334d7e8713b5ac6ca9bad942530ac735cd97b9f68f72bba8cf20471501 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 18:49:04 compute-0 strange_spence[277161]: 167 167
Nov 24 18:49:04 compute-0 systemd[1]: libpod-a7ad43334d7e8713b5ac6ca9bad942530ac735cd97b9f68f72bba8cf20471501.scope: Deactivated successfully.
Nov 24 18:49:04 compute-0 podman[277143]: 2025-11-24 18:49:04.379686274 +0000 UTC m=+0.128842117 container died a7ad43334d7e8713b5ac6ca9bad942530ac735cd97b9f68f72bba8cf20471501 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_spence, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 18:49:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-e367748fc87e54430004e6558c54402f0d2c3731ed1f3a731677d7abfebca40d-merged.mount: Deactivated successfully.
Nov 24 18:49:04 compute-0 podman[277143]: 2025-11-24 18:49:04.436140827 +0000 UTC m=+0.185296650 container remove a7ad43334d7e8713b5ac6ca9bad942530ac735cd97b9f68f72bba8cf20471501 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_spence, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 18:49:04 compute-0 systemd[1]: libpod-conmon-a7ad43334d7e8713b5ac6ca9bad942530ac735cd97b9f68f72bba8cf20471501.scope: Deactivated successfully.
Nov 24 18:49:04 compute-0 podman[277157]: 2025-11-24 18:49:04.506284285 +0000 UTC m=+0.175831418 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 18:49:04 compute-0 podman[277212]: 2025-11-24 18:49:04.661211381 +0000 UTC m=+0.046646224 container create de9c6ba0a480d34e9d6971b871d7fdeb7b7e600935e55b55bac18a4354de04ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 18:49:04 compute-0 systemd[1]: Started libpod-conmon-de9c6ba0a480d34e9d6971b871d7fdeb7b7e600935e55b55bac18a4354de04ef.scope.
Nov 24 18:49:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:49:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:49:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:49:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:49:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:49:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:49:04 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:49:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be3dbcd603e72aa56958d19ecafca17d3f25b045938aff7243e71d1a1e3534e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:49:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be3dbcd603e72aa56958d19ecafca17d3f25b045938aff7243e71d1a1e3534e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:49:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be3dbcd603e72aa56958d19ecafca17d3f25b045938aff7243e71d1a1e3534e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:49:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be3dbcd603e72aa56958d19ecafca17d3f25b045938aff7243e71d1a1e3534e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:49:04 compute-0 podman[277212]: 2025-11-24 18:49:04.646199493 +0000 UTC m=+0.031634356 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:49:04 compute-0 podman[277212]: 2025-11-24 18:49:04.744229204 +0000 UTC m=+0.129664057 container init de9c6ba0a480d34e9d6971b871d7fdeb7b7e600935e55b55bac18a4354de04ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nightingale, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:49:04 compute-0 podman[277212]: 2025-11-24 18:49:04.752076206 +0000 UTC m=+0.137511049 container start de9c6ba0a480d34e9d6971b871d7fdeb7b7e600935e55b55bac18a4354de04ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nightingale, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 18:49:04 compute-0 podman[277212]: 2025-11-24 18:49:04.754749582 +0000 UTC m=+0.140184425 container attach de9c6ba0a480d34e9d6971b871d7fdeb7b7e600935e55b55bac18a4354de04ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:49:05 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Nov 24 18:49:05 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Nov 24 18:49:05 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Nov 24 18:49:05 compute-0 ceph-mon[74927]: pgmap v1006: 321 pgs: 321 active+clean; 89 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 8.0 MiB/s wr, 171 op/s
Nov 24 18:49:05 compute-0 ceph-mon[74927]: osdmap e132: 3 total, 3 up, 3 in
Nov 24 18:49:05 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1009: 321 pgs: 321 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 216 KiB/s rd, 28 MiB/s wr, 310 op/s
Nov 24 18:49:05 compute-0 dazzling_nightingale[277228]: {
Nov 24 18:49:05 compute-0 dazzling_nightingale[277228]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:49:05 compute-0 dazzling_nightingale[277228]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:49:05 compute-0 dazzling_nightingale[277228]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:49:05 compute-0 dazzling_nightingale[277228]:         "osd_id": 0,
Nov 24 18:49:05 compute-0 dazzling_nightingale[277228]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:49:05 compute-0 dazzling_nightingale[277228]:         "type": "bluestore"
Nov 24 18:49:05 compute-0 dazzling_nightingale[277228]:     },
Nov 24 18:49:05 compute-0 dazzling_nightingale[277228]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:49:05 compute-0 dazzling_nightingale[277228]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:49:05 compute-0 dazzling_nightingale[277228]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:49:05 compute-0 dazzling_nightingale[277228]:         "osd_id": 1,
Nov 24 18:49:05 compute-0 dazzling_nightingale[277228]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:49:05 compute-0 dazzling_nightingale[277228]:         "type": "bluestore"
Nov 24 18:49:05 compute-0 dazzling_nightingale[277228]:     },
Nov 24 18:49:05 compute-0 dazzling_nightingale[277228]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:49:05 compute-0 dazzling_nightingale[277228]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:49:05 compute-0 dazzling_nightingale[277228]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:49:05 compute-0 dazzling_nightingale[277228]:         "osd_id": 2,
Nov 24 18:49:05 compute-0 dazzling_nightingale[277228]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:49:05 compute-0 dazzling_nightingale[277228]:         "type": "bluestore"
Nov 24 18:49:05 compute-0 dazzling_nightingale[277228]:     }
Nov 24 18:49:05 compute-0 dazzling_nightingale[277228]: }
Nov 24 18:49:05 compute-0 systemd[1]: libpod-de9c6ba0a480d34e9d6971b871d7fdeb7b7e600935e55b55bac18a4354de04ef.scope: Deactivated successfully.
Nov 24 18:49:05 compute-0 systemd[1]: libpod-de9c6ba0a480d34e9d6971b871d7fdeb7b7e600935e55b55bac18a4354de04ef.scope: Consumed 1.016s CPU time.
Nov 24 18:49:05 compute-0 podman[277261]: 2025-11-24 18:49:05.800641963 +0000 UTC m=+0.021485648 container died de9c6ba0a480d34e9d6971b871d7fdeb7b7e600935e55b55bac18a4354de04ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nightingale, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 18:49:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-be3dbcd603e72aa56958d19ecafca17d3f25b045938aff7243e71d1a1e3534e0-merged.mount: Deactivated successfully.
Nov 24 18:49:05 compute-0 podman[277261]: 2025-11-24 18:49:05.848351572 +0000 UTC m=+0.069195257 container remove de9c6ba0a480d34e9d6971b871d7fdeb7b7e600935e55b55bac18a4354de04ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nightingale, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 18:49:05 compute-0 systemd[1]: libpod-conmon-de9c6ba0a480d34e9d6971b871d7fdeb7b7e600935e55b55bac18a4354de04ef.scope: Deactivated successfully.
Nov 24 18:49:05 compute-0 sudo[277077]: pam_unix(sudo:session): session closed for user root
Nov 24 18:49:05 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:49:05 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:49:05 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:49:05 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:49:05 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 37dd614d-b987-4132-a9e9-7436386b3ded does not exist
Nov 24 18:49:05 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 06b80360-7a1b-4b6e-95eb-422c433e9e84 does not exist
Nov 24 18:49:05 compute-0 sudo[277277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:49:05 compute-0 sudo[277277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:49:05 compute-0 sudo[277277]: pam_unix(sudo:session): session closed for user root
Nov 24 18:49:06 compute-0 podman[277301]: 2025-11-24 18:49:06.051884298 +0000 UTC m=+0.058323260 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 24 18:49:06 compute-0 sudo[277308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:49:06 compute-0 sudo[277308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:49:06 compute-0 sudo[277308]: pam_unix(sudo:session): session closed for user root
Nov 24 18:49:06 compute-0 podman[277346]: 2025-11-24 18:49:06.135687341 +0000 UTC m=+0.056087175 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 18:49:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Nov 24 18:49:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Nov 24 18:49:06 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Nov 24 18:49:06 compute-0 ceph-mon[74927]: osdmap e133: 3 total, 3 up, 3 in
Nov 24 18:49:06 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:49:06 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:49:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Nov 24 18:49:07 compute-0 ceph-mon[74927]: pgmap v1009: 321 pgs: 321 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 216 KiB/s rd, 28 MiB/s wr, 310 op/s
Nov 24 18:49:07 compute-0 ceph-mon[74927]: osdmap e134: 3 total, 3 up, 3 in
Nov 24 18:49:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Nov 24 18:49:07 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Nov 24 18:49:07 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1012: 321 pgs: 321 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 16 MiB/s wr, 107 op/s
Nov 24 18:49:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:49:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Nov 24 18:49:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Nov 24 18:49:08 compute-0 ceph-mon[74927]: osdmap e135: 3 total, 3 up, 3 in
Nov 24 18:49:08 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Nov 24 18:49:09 compute-0 ceph-mon[74927]: pgmap v1012: 321 pgs: 321 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 16 MiB/s wr, 107 op/s
Nov 24 18:49:09 compute-0 ceph-mon[74927]: osdmap e136: 3 total, 3 up, 3 in
Nov 24 18:49:09 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1014: 321 pgs: 321 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 133 KiB/s rd, 15 KiB/s wr, 191 op/s
Nov 24 18:49:10 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Nov 24 18:49:10 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Nov 24 18:49:10 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Nov 24 18:49:10 compute-0 nova_compute[270693]: 2025-11-24 18:49:10.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:49:10 compute-0 nova_compute[270693]: 2025-11-24 18:49:10.563 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:49:10 compute-0 nova_compute[270693]: 2025-11-24 18:49:10.564 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:49:10 compute-0 nova_compute[270693]: 2025-11-24 18:49:10.564 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:49:10 compute-0 nova_compute[270693]: 2025-11-24 18:49:10.564 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 18:49:10 compute-0 nova_compute[270693]: 2025-11-24 18:49:10.565 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:49:11 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:49:11 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2972841125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:49:11 compute-0 nova_compute[270693]: 2025-11-24 18:49:11.037 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:49:11 compute-0 nova_compute[270693]: 2025-11-24 18:49:11.211 270697 WARNING nova.virt.libvirt.driver [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 18:49:11 compute-0 nova_compute[270693]: 2025-11-24 18:49:11.212 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5139MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 18:49:11 compute-0 nova_compute[270693]: 2025-11-24 18:49:11.212 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:49:11 compute-0 nova_compute[270693]: 2025-11-24 18:49:11.213 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:49:11 compute-0 nova_compute[270693]: 2025-11-24 18:49:11.299 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 18:49:11 compute-0 nova_compute[270693]: 2025-11-24 18:49:11.299 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 18:49:11 compute-0 ceph-mon[74927]: pgmap v1014: 321 pgs: 321 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 133 KiB/s rd, 15 KiB/s wr, 191 op/s
Nov 24 18:49:11 compute-0 ceph-mon[74927]: osdmap e137: 3 total, 3 up, 3 in
Nov 24 18:49:11 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2972841125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:49:11 compute-0 nova_compute[270693]: 2025-11-24 18:49:11.479 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Refreshing inventories for resource provider d1cce7ec-de83-4810-91f8-1852891da8a6 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 24 18:49:11 compute-0 nova_compute[270693]: 2025-11-24 18:49:11.502 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Updating ProviderTree inventory for provider d1cce7ec-de83-4810-91f8-1852891da8a6 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 24 18:49:11 compute-0 nova_compute[270693]: 2025-11-24 18:49:11.502 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Updating inventory in ProviderTree for provider d1cce7ec-de83-4810-91f8-1852891da8a6 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 18:49:11 compute-0 nova_compute[270693]: 2025-11-24 18:49:11.527 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Refreshing aggregate associations for resource provider d1cce7ec-de83-4810-91f8-1852891da8a6, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 24 18:49:11 compute-0 nova_compute[270693]: 2025-11-24 18:49:11.553 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Refreshing trait associations for resource provider d1cce7ec-de83-4810-91f8-1852891da8a6, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_FMA3,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE4A,HW_CPU_X86_F16C,HW_CPU_X86_SSE2,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_E1000 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 24 18:49:11 compute-0 nova_compute[270693]: 2025-11-24 18:49:11.575 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:49:11 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1016: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 195 KiB/s rd, 19 KiB/s wr, 272 op/s
Nov 24 18:49:11 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:49:11 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/697120505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:49:12 compute-0 nova_compute[270693]: 2025-11-24 18:49:12.007 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:49:12 compute-0 nova_compute[270693]: 2025-11-24 18:49:12.012 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 18:49:12 compute-0 nova_compute[270693]: 2025-11-24 18:49:12.026 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 18:49:12 compute-0 nova_compute[270693]: 2025-11-24 18:49:12.027 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 18:49:12 compute-0 nova_compute[270693]: 2025-11-24 18:49:12.028 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:49:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Nov 24 18:49:12 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/697120505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:49:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Nov 24 18:49:12 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Nov 24 18:49:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 18:49:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Nov 24 18:49:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Nov 24 18:49:12 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Nov 24 18:49:13 compute-0 nova_compute[270693]: 2025-11-24 18:49:13.023 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:49:13 compute-0 nova_compute[270693]: 2025-11-24 18:49:13.024 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:49:13 compute-0 nova_compute[270693]: 2025-11-24 18:49:13.044 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:49:13 compute-0 nova_compute[270693]: 2025-11-24 18:49:13.044 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 18:49:13 compute-0 nova_compute[270693]: 2025-11-24 18:49:13.044 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 18:49:13 compute-0 nova_compute[270693]: 2025-11-24 18:49:13.058 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 18:49:13 compute-0 nova_compute[270693]: 2025-11-24 18:49:13.059 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:49:13 compute-0 nova_compute[270693]: 2025-11-24 18:49:13.059 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:49:13 compute-0 nova_compute[270693]: 2025-11-24 18:49:13.059 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 18:49:13 compute-0 ceph-mon[74927]: pgmap v1016: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 195 KiB/s rd, 19 KiB/s wr, 272 op/s
Nov 24 18:49:13 compute-0 ceph-mon[74927]: osdmap e138: 3 total, 3 up, 3 in
Nov 24 18:49:13 compute-0 ceph-mon[74927]: osdmap e139: 3 total, 3 up, 3 in
Nov 24 18:49:13 compute-0 nova_compute[270693]: 2025-11-24 18:49:13.530 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:49:13 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1019: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 248 KiB/s rd, 24 KiB/s wr, 347 op/s
Nov 24 18:49:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Nov 24 18:49:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Nov 24 18:49:13 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Nov 24 18:49:14 compute-0 nova_compute[270693]: 2025-11-24 18:49:14.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:49:14 compute-0 nova_compute[270693]: 2025-11-24 18:49:14.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:49:14 compute-0 nova_compute[270693]: 2025-11-24 18:49:14.530 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:49:14 compute-0 ceph-mon[74927]: pgmap v1019: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 248 KiB/s rd, 24 KiB/s wr, 347 op/s
Nov 24 18:49:14 compute-0 ceph-mon[74927]: osdmap e140: 3 total, 3 up, 3 in
Nov 24 18:49:15 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Nov 24 18:49:15 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Nov 24 18:49:15 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Nov 24 18:49:15 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1022: 321 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 314 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 154 KiB/s rd, 12 KiB/s wr, 209 op/s
Nov 24 18:49:16 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Nov 24 18:49:16 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Nov 24 18:49:16 compute-0 ceph-mon[74927]: osdmap e141: 3 total, 3 up, 3 in
Nov 24 18:49:16 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Nov 24 18:49:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Nov 24 18:49:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Nov 24 18:49:17 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Nov 24 18:49:17 compute-0 ceph-mon[74927]: pgmap v1022: 321 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 314 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 154 KiB/s rd, 12 KiB/s wr, 209 op/s
Nov 24 18:49:17 compute-0 ceph-mon[74927]: osdmap e142: 3 total, 3 up, 3 in
Nov 24 18:49:17 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1025: 321 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 314 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 6.0 KiB/s wr, 111 op/s
Nov 24 18:49:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:49:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Nov 24 18:49:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Nov 24 18:49:17 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Nov 24 18:49:18 compute-0 ceph-mon[74927]: osdmap e143: 3 total, 3 up, 3 in
Nov 24 18:49:18 compute-0 ceph-mon[74927]: osdmap e144: 3 total, 3 up, 3 in
Nov 24 18:49:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 18:49:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2729009600' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:49:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 18:49:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2729009600' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:49:19 compute-0 ceph-mon[74927]: pgmap v1025: 321 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 314 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 6.0 KiB/s wr, 111 op/s
Nov 24 18:49:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/2729009600' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:49:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/2729009600' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:49:19 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1027: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 132 KiB/s rd, 11 KiB/s wr, 181 op/s
Nov 24 18:49:21 compute-0 ceph-mon[74927]: pgmap v1027: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 132 KiB/s rd, 11 KiB/s wr, 181 op/s
Nov 24 18:49:21 compute-0 nova_compute[270693]: 2025-11-24 18:49:21.484 270697 DEBUG oslo_concurrency.lockutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "81f5edb9-2756-4a6e-bc3a-fa770161d562" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:49:21 compute-0 nova_compute[270693]: 2025-11-24 18:49:21.485 270697 DEBUG oslo_concurrency.lockutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "81f5edb9-2756-4a6e-bc3a-fa770161d562" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:49:21 compute-0 nova_compute[270693]: 2025-11-24 18:49:21.511 270697 DEBUG nova.compute.manager [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 18:49:21 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1028: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 11 KiB/s wr, 175 op/s
Nov 24 18:49:21 compute-0 nova_compute[270693]: 2025-11-24 18:49:21.654 270697 DEBUG oslo_concurrency.lockutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:49:21 compute-0 nova_compute[270693]: 2025-11-24 18:49:21.655 270697 DEBUG oslo_concurrency.lockutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:49:21 compute-0 nova_compute[270693]: 2025-11-24 18:49:21.666 270697 DEBUG nova.virt.hardware [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 18:49:21 compute-0 nova_compute[270693]: 2025-11-24 18:49:21.667 270697 INFO nova.compute.claims [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Claim successful on node compute-0.ctlplane.example.com
Nov 24 18:49:21 compute-0 nova_compute[270693]: 2025-11-24 18:49:21.790 270697 DEBUG oslo_concurrency.processutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:49:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:49:22 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2586813287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:49:22 compute-0 nova_compute[270693]: 2025-11-24 18:49:22.224 270697 DEBUG oslo_concurrency.processutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:49:22 compute-0 nova_compute[270693]: 2025-11-24 18:49:22.231 270697 DEBUG nova.compute.provider_tree [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 18:49:22 compute-0 nova_compute[270693]: 2025-11-24 18:49:22.256 270697 DEBUG nova.scheduler.client.report [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 18:49:22 compute-0 nova_compute[270693]: 2025-11-24 18:49:22.319 270697 DEBUG oslo_concurrency.lockutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:49:22 compute-0 nova_compute[270693]: 2025-11-24 18:49:22.320 270697 DEBUG nova.compute.manager [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 18:49:22 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2586813287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:49:22 compute-0 nova_compute[270693]: 2025-11-24 18:49:22.393 270697 DEBUG nova.compute.manager [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 18:49:22 compute-0 nova_compute[270693]: 2025-11-24 18:49:22.394 270697 DEBUG nova.network.neutron [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 18:49:22 compute-0 nova_compute[270693]: 2025-11-24 18:49:22.434 270697 INFO nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 18:49:22 compute-0 nova_compute[270693]: 2025-11-24 18:49:22.456 270697 DEBUG nova.compute.manager [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 18:49:22 compute-0 nova_compute[270693]: 2025-11-24 18:49:22.498 270697 INFO nova.virt.block_device [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Booting with volume 57bd14c1-40c4-42ca-854f-95f89e621d53 at /dev/vda
Nov 24 18:49:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:49:22.743 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:49:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:49:22.744 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:49:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:49:22.744 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:49:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:49:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Nov 24 18:49:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Nov 24 18:49:22 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Nov 24 18:49:23 compute-0 nova_compute[270693]: 2025-11-24 18:49:23.035 270697 DEBUG os_brick.utils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 24 18:49:23 compute-0 nova_compute[270693]: 2025-11-24 18:49:23.036 270697 INFO oslo.privsep.daemon [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmp0nuwri68/privsep.sock']
Nov 24 18:49:23 compute-0 nova_compute[270693]: 2025-11-24 18:49:23.048 270697 DEBUG nova.network.neutron [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 24 18:49:23 compute-0 nova_compute[270693]: 2025-11-24 18:49:23.048 270697 DEBUG nova.compute.manager [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 18:49:23 compute-0 ceph-mon[74927]: pgmap v1028: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 11 KiB/s wr, 175 op/s
Nov 24 18:49:23 compute-0 ceph-mon[74927]: osdmap e145: 3 total, 3 up, 3 in
Nov 24 18:49:23 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1030: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 10 KiB/s wr, 168 op/s
Nov 24 18:49:23 compute-0 nova_compute[270693]: 2025-11-24 18:49:23.659 270697 INFO oslo.privsep.daemon [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Spawned new privsep daemon via rootwrap
Nov 24 18:49:23 compute-0 nova_compute[270693]: 2025-11-24 18:49:23.557 277439 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 24 18:49:23 compute-0 nova_compute[270693]: 2025-11-24 18:49:23.561 277439 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 24 18:49:23 compute-0 nova_compute[270693]: 2025-11-24 18:49:23.562 277439 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 24 18:49:23 compute-0 nova_compute[270693]: 2025-11-24 18:49:23.563 277439 INFO oslo.privsep.daemon [-] privsep daemon running as pid 277439
Nov 24 18:49:23 compute-0 nova_compute[270693]: 2025-11-24 18:49:23.662 277439 DEBUG oslo.privsep.daemon [-] privsep: reply[4f5c0fc3-59d2-426d-b66d-e0f9b833d8c0]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 18:49:23 compute-0 nova_compute[270693]: 2025-11-24 18:49:23.748 277439 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:49:23 compute-0 nova_compute[270693]: 2025-11-24 18:49:23.760 277439 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:49:23 compute-0 nova_compute[270693]: 2025-11-24 18:49:23.760 277439 DEBUG oslo.privsep.daemon [-] privsep: reply[cbe559ea-e46a-4768-b017-f437c9ff23b7]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 18:49:23 compute-0 nova_compute[270693]: 2025-11-24 18:49:23.761 277439 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:49:23 compute-0 nova_compute[270693]: 2025-11-24 18:49:23.770 277439 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:49:23 compute-0 nova_compute[270693]: 2025-11-24 18:49:23.770 277439 DEBUG oslo.privsep.daemon [-] privsep: reply[9a60aba2-4c75-40e2-a71e-d22c29c499ff]: (4, ('InitiatorName=iqn.1994-05.com.redhat:cf95ee7bc55e', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 18:49:23 compute-0 nova_compute[270693]: 2025-11-24 18:49:23.772 277439 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:49:23 compute-0 nova_compute[270693]: 2025-11-24 18:49:23.782 277439 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:49:23 compute-0 nova_compute[270693]: 2025-11-24 18:49:23.782 277439 DEBUG oslo.privsep.daemon [-] privsep: reply[b96185d1-919d-4bc9-b1ac-845dd50c2484]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 18:49:23 compute-0 nova_compute[270693]: 2025-11-24 18:49:23.784 277439 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1cd8df-838c-48de-8e4b-4ca1095071e5]: (4, 'ce8f254e-4b98-4140-abc7-8040b35476ad') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 18:49:23 compute-0 nova_compute[270693]: 2025-11-24 18:49:23.784 270697 DEBUG oslo_concurrency.processutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:49:23 compute-0 nova_compute[270693]: 2025-11-24 18:49:23.802 270697 DEBUG oslo_concurrency.processutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CMD "nvme version" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:49:23 compute-0 nova_compute[270693]: 2025-11-24 18:49:23.805 270697 DEBUG os_brick.initiator.connectors.lightos [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 24 18:49:23 compute-0 nova_compute[270693]: 2025-11-24 18:49:23.806 270697 DEBUG os_brick.initiator.connectors.lightos [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 24 18:49:23 compute-0 nova_compute[270693]: 2025-11-24 18:49:23.806 270697 DEBUG os_brick.initiator.connectors.lightos [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 24 18:49:23 compute-0 nova_compute[270693]: 2025-11-24 18:49:23.807 270697 DEBUG os_brick.utils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] <== get_connector_properties: return (771ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:cf95ee7bc55e', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'ce8f254e-4b98-4140-abc7-8040b35476ad', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 24 18:49:23 compute-0 nova_compute[270693]: 2025-11-24 18:49:23.808 270697 DEBUG nova.virt.block_device [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Updating existing volume attachment record: cab52bb9-c5f2-4664-afde-1d2efa7d2dd3 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 24 18:49:24 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 24 18:49:24 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4007807367' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.187 270697 DEBUG nova.compute.manager [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.188 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.189 270697 INFO nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Creating image(s)
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.189 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.190 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Ensure instance console log exists: /var/lib/nova/instances/81f5edb9-2756-4a6e-bc3a-fa770161d562/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.190 270697 DEBUG oslo_concurrency.lockutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.190 270697 DEBUG oslo_concurrency.lockutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.190 270697 DEBUG oslo_concurrency.lockutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.192 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-57bd14c1-40c4-42ca-854f-95f89e621d53', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '57bd14c1-40c4-42ca-854f-95f89e621d53', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '81f5edb9-2756-4a6e-bc3a-fa770161d562', 'attached_at': '', 'detached_at': '', 'volume_id': '57bd14c1-40c4-42ca-854f-95f89e621d53', 'serial': '57bd14c1-40c4-42ca-854f-95f89e621d53'}, 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'boot_index': 0, 'guest_format': None, 'device_type': 'disk', 'attachment_id': 'cab52bb9-c5f2-4664-afde-1d2efa7d2dd3', 'delete_on_termination': True, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.196 270697 WARNING nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.200 270697 DEBUG nova.virt.libvirt.host [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.201 270697 DEBUG nova.virt.libvirt.host [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.204 270697 DEBUG nova.virt.libvirt.host [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.204 270697 DEBUG nova.virt.libvirt.host [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.204 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.205 270697 DEBUG nova.virt.hardware [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T18:48:11Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fa20e92f-7c52-40ac-838f-32e378b8ec04',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.205 270697 DEBUG nova.virt.hardware [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.205 270697 DEBUG nova.virt.hardware [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.205 270697 DEBUG nova.virt.hardware [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.206 270697 DEBUG nova.virt.hardware [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.206 270697 DEBUG nova.virt.hardware [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.206 270697 DEBUG nova.virt.hardware [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.206 270697 DEBUG nova.virt.hardware [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.207 270697 DEBUG nova.virt.hardware [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.207 270697 DEBUG nova.virt.hardware [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.207 270697 DEBUG nova.virt.hardware [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.228 270697 DEBUG nova.storage.rbd_utils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] rbd image 81f5edb9-2756-4a6e-bc3a-fa770161d562_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.232 270697 DEBUG nova.privsep.utils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.233 270697 DEBUG oslo_concurrency.processutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:49:25 compute-0 ceph-mon[74927]: pgmap v1030: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 10 KiB/s wr, 168 op/s
Nov 24 18:49:25 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/4007807367' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 18:49:25 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 24 18:49:25 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1317962750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 18:49:25 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1031: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 8.0 KiB/s wr, 131 op/s
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.627 270697 DEBUG oslo_concurrency.processutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.629 270697 DEBUG oslo_concurrency.lockutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.629 270697 DEBUG oslo_concurrency.lockutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.631 270697 DEBUG oslo_concurrency.lockutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:49:25 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 24 18:49:25 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.695 270697 DEBUG nova.objects.instance [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lazy-loading 'pci_devices' on Instance uuid 81f5edb9-2756-4a6e-bc3a-fa770161d562 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.711 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] End _get_guest_xml xml=<domain type="kvm">
Nov 24 18:49:25 compute-0 nova_compute[270693]:   <uuid>81f5edb9-2756-4a6e-bc3a-fa770161d562</uuid>
Nov 24 18:49:25 compute-0 nova_compute[270693]:   <name>instance-00000001</name>
Nov 24 18:49:25 compute-0 nova_compute[270693]:   <memory>131072</memory>
Nov 24 18:49:25 compute-0 nova_compute[270693]:   <vcpu>1</vcpu>
Nov 24 18:49:25 compute-0 nova_compute[270693]:   <metadata>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 18:49:25 compute-0 nova_compute[270693]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:       <nova:name>instance-depend-image</nova:name>
Nov 24 18:49:25 compute-0 nova_compute[270693]:       <nova:creationTime>2025-11-24 18:49:25</nova:creationTime>
Nov 24 18:49:25 compute-0 nova_compute[270693]:       <nova:flavor name="m1.nano">
Nov 24 18:49:25 compute-0 nova_compute[270693]:         <nova:memory>128</nova:memory>
Nov 24 18:49:25 compute-0 nova_compute[270693]:         <nova:disk>1</nova:disk>
Nov 24 18:49:25 compute-0 nova_compute[270693]:         <nova:swap>0</nova:swap>
Nov 24 18:49:25 compute-0 nova_compute[270693]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 18:49:25 compute-0 nova_compute[270693]:         <nova:vcpus>1</nova:vcpus>
Nov 24 18:49:25 compute-0 nova_compute[270693]:       </nova:flavor>
Nov 24 18:49:25 compute-0 nova_compute[270693]:       <nova:owner>
Nov 24 18:49:25 compute-0 nova_compute[270693]:         <nova:user uuid="c5033dc71ef0458982cc0f8121662150">tempest-ImageDependencyTests-981399736-project-member</nova:user>
Nov 24 18:49:25 compute-0 nova_compute[270693]:         <nova:project uuid="0d692fe6fe5e446c86fe7152afbbaa17">tempest-ImageDependencyTests-981399736</nova:project>
Nov 24 18:49:25 compute-0 nova_compute[270693]:       </nova:owner>
Nov 24 18:49:25 compute-0 nova_compute[270693]:       <nova:ports/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     </nova:instance>
Nov 24 18:49:25 compute-0 nova_compute[270693]:   </metadata>
Nov 24 18:49:25 compute-0 nova_compute[270693]:   <sysinfo type="smbios">
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <system>
Nov 24 18:49:25 compute-0 nova_compute[270693]:       <entry name="manufacturer">RDO</entry>
Nov 24 18:49:25 compute-0 nova_compute[270693]:       <entry name="product">OpenStack Compute</entry>
Nov 24 18:49:25 compute-0 nova_compute[270693]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 18:49:25 compute-0 nova_compute[270693]:       <entry name="serial">81f5edb9-2756-4a6e-bc3a-fa770161d562</entry>
Nov 24 18:49:25 compute-0 nova_compute[270693]:       <entry name="uuid">81f5edb9-2756-4a6e-bc3a-fa770161d562</entry>
Nov 24 18:49:25 compute-0 nova_compute[270693]:       <entry name="family">Virtual Machine</entry>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     </system>
Nov 24 18:49:25 compute-0 nova_compute[270693]:   </sysinfo>
Nov 24 18:49:25 compute-0 nova_compute[270693]:   <os>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <boot dev="hd"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <smbios mode="sysinfo"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:   </os>
Nov 24 18:49:25 compute-0 nova_compute[270693]:   <features>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <acpi/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <apic/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <vmcoreinfo/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:   </features>
Nov 24 18:49:25 compute-0 nova_compute[270693]:   <clock offset="utc">
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <timer name="hpet" present="no"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:   </clock>
Nov 24 18:49:25 compute-0 nova_compute[270693]:   <cpu mode="host-model" match="exact">
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:   </cpu>
Nov 24 18:49:25 compute-0 nova_compute[270693]:   <devices>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <disk type="network" device="cdrom">
Nov 24 18:49:25 compute-0 nova_compute[270693]:       <driver type="raw" cache="none"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:       <source protocol="rbd" name="vms/81f5edb9-2756-4a6e-bc3a-fa770161d562_disk.config">
Nov 24 18:49:25 compute-0 nova_compute[270693]:         <host name="192.168.122.100" port="6789"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:       </source>
Nov 24 18:49:25 compute-0 nova_compute[270693]:       <auth username="openstack">
Nov 24 18:49:25 compute-0 nova_compute[270693]:         <secret type="ceph" uuid="e5ee928f-099b-569b-93c9-ecf025cbb50d"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:       </auth>
Nov 24 18:49:25 compute-0 nova_compute[270693]:       <target dev="sda" bus="sata"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     </disk>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <disk type="network" device="disk">
Nov 24 18:49:25 compute-0 nova_compute[270693]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:       <source protocol="rbd" name="volumes/volume-57bd14c1-40c4-42ca-854f-95f89e621d53">
Nov 24 18:49:25 compute-0 nova_compute[270693]:         <host name="192.168.122.100" port="6789"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:       </source>
Nov 24 18:49:25 compute-0 nova_compute[270693]:       <auth username="openstack">
Nov 24 18:49:25 compute-0 nova_compute[270693]:         <secret type="ceph" uuid="e5ee928f-099b-569b-93c9-ecf025cbb50d"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:       </auth>
Nov 24 18:49:25 compute-0 nova_compute[270693]:       <target dev="vda" bus="virtio"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:       <serial>57bd14c1-40c4-42ca-854f-95f89e621d53</serial>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     </disk>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <serial type="pty">
Nov 24 18:49:25 compute-0 nova_compute[270693]:       <log file="/var/lib/nova/instances/81f5edb9-2756-4a6e-bc3a-fa770161d562/console.log" append="off"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     </serial>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <video>
Nov 24 18:49:25 compute-0 nova_compute[270693]:       <model type="virtio"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     </video>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <input type="tablet" bus="usb"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <rng model="virtio">
Nov 24 18:49:25 compute-0 nova_compute[270693]:       <backend model="random">/dev/urandom</backend>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     </rng>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <controller type="usb" index="0"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     <memballoon model="virtio">
Nov 24 18:49:25 compute-0 nova_compute[270693]:       <stats period="10"/>
Nov 24 18:49:25 compute-0 nova_compute[270693]:     </memballoon>
Nov 24 18:49:25 compute-0 nova_compute[270693]:   </devices>
Nov 24 18:49:25 compute-0 nova_compute[270693]: </domain>
Nov 24 18:49:25 compute-0 nova_compute[270693]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.765 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.765 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.766 270697 INFO nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Using config drive
Nov 24 18:49:25 compute-0 nova_compute[270693]: 2025-11-24 18:49:25.791 270697 DEBUG nova.storage.rbd_utils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] rbd image 81f5edb9-2756-4a6e-bc3a-fa770161d562_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 18:49:26 compute-0 nova_compute[270693]: 2025-11-24 18:49:26.290 270697 INFO nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Creating config drive at /var/lib/nova/instances/81f5edb9-2756-4a6e-bc3a-fa770161d562/disk.config
Nov 24 18:49:26 compute-0 nova_compute[270693]: 2025-11-24 18:49:26.299 270697 DEBUG oslo_concurrency.processutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/81f5edb9-2756-4a6e-bc3a-fa770161d562/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoeicw7cu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:49:26 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1317962750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 18:49:26 compute-0 nova_compute[270693]: 2025-11-24 18:49:26.443 270697 DEBUG oslo_concurrency.processutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/81f5edb9-2756-4a6e-bc3a-fa770161d562/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoeicw7cu" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:49:26 compute-0 nova_compute[270693]: 2025-11-24 18:49:26.467 270697 DEBUG nova.storage.rbd_utils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] rbd image 81f5edb9-2756-4a6e-bc3a-fa770161d562_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 18:49:26 compute-0 nova_compute[270693]: 2025-11-24 18:49:26.470 270697 DEBUG oslo_concurrency.processutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/81f5edb9-2756-4a6e-bc3a-fa770161d562/disk.config 81f5edb9-2756-4a6e-bc3a-fa770161d562_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:49:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Nov 24 18:49:27 compute-0 ceph-mon[74927]: pgmap v1031: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 8.0 KiB/s wr, 131 op/s
Nov 24 18:49:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Nov 24 18:49:27 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Nov 24 18:49:27 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1033: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.9 KiB/s wr, 34 op/s
Nov 24 18:49:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:49:28 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Nov 24 18:49:28 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Nov 24 18:49:28 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Nov 24 18:49:28 compute-0 ceph-mon[74927]: osdmap e146: 3 total, 3 up, 3 in
Nov 24 18:49:28 compute-0 nova_compute[270693]: 2025-11-24 18:49:28.566 270697 DEBUG oslo_concurrency.processutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/81f5edb9-2756-4a6e-bc3a-fa770161d562/disk.config 81f5edb9-2756-4a6e-bc3a-fa770161d562_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:49:28 compute-0 nova_compute[270693]: 2025-11-24 18:49:28.567 270697 INFO nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Deleting local config drive /var/lib/nova/instances/81f5edb9-2756-4a6e-bc3a-fa770161d562/disk.config because it was imported into RBD.
Nov 24 18:49:28 compute-0 systemd-machined[232503]: New machine qemu-1-instance-00000001.
Nov 24 18:49:28 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Nov 24 18:49:29 compute-0 nova_compute[270693]: 2025-11-24 18:49:29.266 270697 DEBUG nova.virt.driver [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Emitting event <LifecycleEvent: 1764010169.26571, 81f5edb9-2756-4a6e-bc3a-fa770161d562 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 18:49:29 compute-0 nova_compute[270693]: 2025-11-24 18:49:29.268 270697 INFO nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] VM Resumed (Lifecycle Event)
Nov 24 18:49:29 compute-0 nova_compute[270693]: 2025-11-24 18:49:29.277 270697 DEBUG nova.compute.manager [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 18:49:29 compute-0 nova_compute[270693]: 2025-11-24 18:49:29.278 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 18:49:29 compute-0 nova_compute[270693]: 2025-11-24 18:49:29.281 270697 INFO nova.virt.libvirt.driver [-] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Instance spawned successfully.
Nov 24 18:49:29 compute-0 nova_compute[270693]: 2025-11-24 18:49:29.282 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 18:49:29 compute-0 nova_compute[270693]: 2025-11-24 18:49:29.330 270697 DEBUG nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 18:49:29 compute-0 nova_compute[270693]: 2025-11-24 18:49:29.340 270697 DEBUG nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 18:49:29 compute-0 nova_compute[270693]: 2025-11-24 18:49:29.345 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 18:49:29 compute-0 nova_compute[270693]: 2025-11-24 18:49:29.346 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 18:49:29 compute-0 nova_compute[270693]: 2025-11-24 18:49:29.347 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 18:49:29 compute-0 nova_compute[270693]: 2025-11-24 18:49:29.348 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 18:49:29 compute-0 nova_compute[270693]: 2025-11-24 18:49:29.349 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 18:49:29 compute-0 nova_compute[270693]: 2025-11-24 18:49:29.350 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 18:49:29 compute-0 nova_compute[270693]: 2025-11-24 18:49:29.363 270697 INFO nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 18:49:29 compute-0 nova_compute[270693]: 2025-11-24 18:49:29.364 270697 DEBUG nova.virt.driver [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Emitting event <LifecycleEvent: 1764010169.276756, 81f5edb9-2756-4a6e-bc3a-fa770161d562 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 18:49:29 compute-0 nova_compute[270693]: 2025-11-24 18:49:29.365 270697 INFO nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] VM Started (Lifecycle Event)
Nov 24 18:49:29 compute-0 nova_compute[270693]: 2025-11-24 18:49:29.430 270697 DEBUG nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 18:49:29 compute-0 nova_compute[270693]: 2025-11-24 18:49:29.433 270697 DEBUG nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 18:49:29 compute-0 ceph-mon[74927]: pgmap v1033: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.9 KiB/s wr, 34 op/s
Nov 24 18:49:29 compute-0 ceph-mon[74927]: osdmap e147: 3 total, 3 up, 3 in
Nov 24 18:49:29 compute-0 nova_compute[270693]: 2025-11-24 18:49:29.466 270697 INFO nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 18:49:29 compute-0 nova_compute[270693]: 2025-11-24 18:49:29.478 270697 INFO nova.compute.manager [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Took 4.29 seconds to spawn the instance on the hypervisor.
Nov 24 18:49:29 compute-0 nova_compute[270693]: 2025-11-24 18:49:29.479 270697 DEBUG nova.compute.manager [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 18:49:29 compute-0 nova_compute[270693]: 2025-11-24 18:49:29.550 270697 INFO nova.compute.manager [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Took 7.94 seconds to build instance.
Nov 24 18:49:29 compute-0 nova_compute[270693]: 2025-11-24 18:49:29.569 270697 DEBUG oslo_concurrency.lockutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "81f5edb9-2756-4a6e-bc3a-fa770161d562" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:49:29 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1035: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 22 KiB/s wr, 4 op/s
Nov 24 18:49:31 compute-0 ceph-mon[74927]: pgmap v1035: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 22 KiB/s wr, 4 op/s
Nov 24 18:49:31 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1036: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 19 KiB/s wr, 11 op/s
Nov 24 18:49:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Nov 24 18:49:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Nov 24 18:49:32 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Nov 24 18:49:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:49:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Nov 24 18:49:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Nov 24 18:49:32 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Nov 24 18:49:33 compute-0 ceph-mon[74927]: pgmap v1036: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 19 KiB/s wr, 11 op/s
Nov 24 18:49:33 compute-0 ceph-mon[74927]: osdmap e148: 3 total, 3 up, 3 in
Nov 24 18:49:33 compute-0 ceph-mon[74927]: osdmap e149: 3 total, 3 up, 3 in
Nov 24 18:49:33 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1039: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 27 KiB/s wr, 77 op/s
Nov 24 18:49:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Nov 24 18:49:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Nov 24 18:49:34 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Nov 24 18:49:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:49:34
Nov 24 18:49:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:49:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:49:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'vms', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'backups', 'cephfs.cephfs.data']
Nov 24 18:49:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:49:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:49:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:49:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:49:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:49:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:49:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:49:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:49:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:49:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:49:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:49:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:49:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:49:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:49:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:49:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:49:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:49:35 compute-0 podman[277622]: 2025-11-24 18:49:35.005062082 +0000 UTC m=+0.086354046 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 18:49:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Nov 24 18:49:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Nov 24 18:49:35 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Nov 24 18:49:35 compute-0 ceph-mon[74927]: pgmap v1039: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 27 KiB/s wr, 77 op/s
Nov 24 18:49:35 compute-0 ceph-mon[74927]: osdmap e150: 3 total, 3 up, 3 in
Nov 24 18:49:35 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1042: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 4.5 KiB/s wr, 130 op/s
Nov 24 18:49:36 compute-0 ceph-mon[74927]: osdmap e151: 3 total, 3 up, 3 in
Nov 24 18:49:36 compute-0 ceph-mon[74927]: pgmap v1042: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 4.5 KiB/s wr, 130 op/s
Nov 24 18:49:36 compute-0 podman[277648]: 2025-11-24 18:49:36.964545384 +0000 UTC m=+0.053941442 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 18:49:36 compute-0 podman[277649]: 2025-11-24 18:49:36.973003052 +0000 UTC m=+0.057795987 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 24 18:49:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Nov 24 18:49:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Nov 24 18:49:37 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Nov 24 18:49:37 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1044: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 3.8 KiB/s wr, 110 op/s
Nov 24 18:49:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:49:38 compute-0 ceph-mon[74927]: osdmap e152: 3 total, 3 up, 3 in
Nov 24 18:49:38 compute-0 ceph-mon[74927]: pgmap v1044: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 3.8 KiB/s wr, 110 op/s
Nov 24 18:49:39 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1045: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 4.5 KiB/s wr, 61 op/s
Nov 24 18:49:40 compute-0 ceph-mon[74927]: pgmap v1045: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 4.5 KiB/s wr, 61 op/s
Nov 24 18:49:41 compute-0 nova_compute[270693]: 2025-11-24 18:49:41.534 270697 DEBUG oslo_concurrency.lockutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "3226af13-afcf-47ff-91b3-2ccec9def10d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:49:41 compute-0 nova_compute[270693]: 2025-11-24 18:49:41.535 270697 DEBUG oslo_concurrency.lockutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "3226af13-afcf-47ff-91b3-2ccec9def10d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:49:41 compute-0 nova_compute[270693]: 2025-11-24 18:49:41.554 270697 DEBUG nova.compute.manager [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 18:49:41 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1046: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 5.0 KiB/s wr, 93 op/s
Nov 24 18:49:41 compute-0 nova_compute[270693]: 2025-11-24 18:49:41.642 270697 DEBUG oslo_concurrency.lockutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:49:41 compute-0 nova_compute[270693]: 2025-11-24 18:49:41.642 270697 DEBUG oslo_concurrency.lockutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:49:41 compute-0 nova_compute[270693]: 2025-11-24 18:49:41.649 270697 DEBUG nova.virt.hardware [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 18:49:41 compute-0 nova_compute[270693]: 2025-11-24 18:49:41.649 270697 INFO nova.compute.claims [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Claim successful on node compute-0.ctlplane.example.com
Nov 24 18:49:41 compute-0 nova_compute[270693]: 2025-11-24 18:49:41.793 270697 DEBUG oslo_concurrency.processutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:49:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:49:42 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/498490164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:49:42 compute-0 nova_compute[270693]: 2025-11-24 18:49:42.244 270697 DEBUG oslo_concurrency.processutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:49:42 compute-0 nova_compute[270693]: 2025-11-24 18:49:42.252 270697 DEBUG nova.compute.provider_tree [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 18:49:42 compute-0 nova_compute[270693]: 2025-11-24 18:49:42.277 270697 DEBUG nova.scheduler.client.report [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 18:49:42 compute-0 nova_compute[270693]: 2025-11-24 18:49:42.302 270697 DEBUG oslo_concurrency.lockutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:49:42 compute-0 nova_compute[270693]: 2025-11-24 18:49:42.303 270697 DEBUG nova.compute.manager [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 18:49:42 compute-0 nova_compute[270693]: 2025-11-24 18:49:42.353 270697 DEBUG nova.compute.manager [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 18:49:42 compute-0 nova_compute[270693]: 2025-11-24 18:49:42.354 270697 DEBUG nova.network.neutron [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 18:49:42 compute-0 nova_compute[270693]: 2025-11-24 18:49:42.375 270697 INFO nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 18:49:42 compute-0 nova_compute[270693]: 2025-11-24 18:49:42.392 270697 DEBUG nova.compute.manager [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 18:49:42 compute-0 nova_compute[270693]: 2025-11-24 18:49:42.480 270697 DEBUG nova.compute.manager [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 18:49:42 compute-0 nova_compute[270693]: 2025-11-24 18:49:42.481 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 18:49:42 compute-0 nova_compute[270693]: 2025-11-24 18:49:42.481 270697 INFO nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Creating image(s)
Nov 24 18:49:42 compute-0 nova_compute[270693]: 2025-11-24 18:49:42.507 270697 DEBUG nova.storage.rbd_utils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] rbd image 3226af13-afcf-47ff-91b3-2ccec9def10d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 18:49:42 compute-0 nova_compute[270693]: 2025-11-24 18:49:42.536 270697 DEBUG nova.storage.rbd_utils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] rbd image 3226af13-afcf-47ff-91b3-2ccec9def10d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 18:49:42 compute-0 nova_compute[270693]: 2025-11-24 18:49:42.555 270697 DEBUG nova.storage.rbd_utils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] rbd image 3226af13-afcf-47ff-91b3-2ccec9def10d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 18:49:42 compute-0 nova_compute[270693]: 2025-11-24 18:49:42.557 270697 DEBUG oslo_concurrency.lockutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "729e6718c1087801824b83fd3da972f8762743ad" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:49:42 compute-0 nova_compute[270693]: 2025-11-24 18:49:42.558 270697 DEBUG oslo_concurrency.lockutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "729e6718c1087801824b83fd3da972f8762743ad" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:49:42 compute-0 ceph-mon[74927]: pgmap v1046: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 5.0 KiB/s wr, 93 op/s
Nov 24 18:49:42 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/498490164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:49:42 compute-0 nova_compute[270693]: 2025-11-24 18:49:42.788 270697 DEBUG nova.virt.libvirt.imagebackend [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Image locations are: [{'url': 'rbd://e5ee928f-099b-569b-93c9-ecf025cbb50d/images/e08f0b9d-adb5-48f3-899f-503d3912e516/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://e5ee928f-099b-569b-93c9-ecf025cbb50d/images/e08f0b9d-adb5-48f3-899f-503d3912e516/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 24 18:49:42 compute-0 nova_compute[270693]: 2025-11-24 18:49:42.832 270697 DEBUG nova.virt.libvirt.imagebackend [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Selected location: {'url': 'rbd://e5ee928f-099b-569b-93c9-ecf025cbb50d/images/e08f0b9d-adb5-48f3-899f-503d3912e516/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 24 18:49:42 compute-0 nova_compute[270693]: 2025-11-24 18:49:42.833 270697 DEBUG nova.storage.rbd_utils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] cloning images/e08f0b9d-adb5-48f3-899f-503d3912e516@snap to None/3226af13-afcf-47ff-91b3-2ccec9def10d_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 24 18:49:42 compute-0 nova_compute[270693]: 2025-11-24 18:49:42.875 270697 DEBUG nova.network.neutron [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 24 18:49:42 compute-0 nova_compute[270693]: 2025-11-24 18:49:42.875 270697 DEBUG nova.compute.manager [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 18:49:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:49:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Nov 24 18:49:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Nov 24 18:49:42 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Nov 24 18:49:42 compute-0 nova_compute[270693]: 2025-11-24 18:49:42.973 270697 DEBUG oslo_concurrency.lockutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "729e6718c1087801824b83fd3da972f8762743ad" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.415s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:49:43 compute-0 nova_compute[270693]: 2025-11-24 18:49:43.139 270697 DEBUG nova.storage.rbd_utils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] resizing rbd image 3226af13-afcf-47ff-91b3-2ccec9def10d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 24 18:49:43 compute-0 nova_compute[270693]: 2025-11-24 18:49:43.210 270697 DEBUG nova.objects.instance [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lazy-loading 'migration_context' on Instance uuid 3226af13-afcf-47ff-91b3-2ccec9def10d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 18:49:43 compute-0 nova_compute[270693]: 2025-11-24 18:49:43.229 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 18:49:43 compute-0 nova_compute[270693]: 2025-11-24 18:49:43.229 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Ensure instance console log exists: /var/lib/nova/instances/3226af13-afcf-47ff-91b3-2ccec9def10d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 18:49:43 compute-0 nova_compute[270693]: 2025-11-24 18:49:43.230 270697 DEBUG oslo_concurrency.lockutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:49:43 compute-0 nova_compute[270693]: 2025-11-24 18:49:43.230 270697 DEBUG oslo_concurrency.lockutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:49:43 compute-0 nova_compute[270693]: 2025-11-24 18:49:43.231 270697 DEBUG oslo_concurrency.lockutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:49:43 compute-0 nova_compute[270693]: 2025-11-24 18:49:43.232 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b313313bc424bc1da2fd32d986e790f4',container_format='bare',created_at=2025-11-24T18:49:36Z,direct_url=<?>,disk_format='raw',id=e08f0b9d-adb5-48f3-899f-503d3912e516,min_disk=0,min_ram=0,name='tempest-image-dependency-test-290862993',owner='0d692fe6fe5e446c86fe7152afbbaa17',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-11-24T18:49:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'boot_index': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'encryption_format': None, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'e08f0b9d-adb5-48f3-899f-503d3912e516'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 18:49:43 compute-0 nova_compute[270693]: 2025-11-24 18:49:43.236 270697 WARNING nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 18:49:43 compute-0 nova_compute[270693]: 2025-11-24 18:49:43.241 270697 DEBUG nova.virt.libvirt.host [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 18:49:43 compute-0 nova_compute[270693]: 2025-11-24 18:49:43.242 270697 DEBUG nova.virt.libvirt.host [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 18:49:43 compute-0 nova_compute[270693]: 2025-11-24 18:49:43.245 270697 DEBUG nova.virt.libvirt.host [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 18:49:43 compute-0 nova_compute[270693]: 2025-11-24 18:49:43.245 270697 DEBUG nova.virt.libvirt.host [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 24 18:49:43 compute-0 nova_compute[270693]: 2025-11-24 18:49:43.245 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 18:49:43 compute-0 nova_compute[270693]: 2025-11-24 18:49:43.246 270697 DEBUG nova.virt.hardware [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T18:48:11Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fa20e92f-7c52-40ac-838f-32e378b8ec04',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b313313bc424bc1da2fd32d986e790f4',container_format='bare',created_at=2025-11-24T18:49:36Z,direct_url=<?>,disk_format='raw',id=e08f0b9d-adb5-48f3-899f-503d3912e516,min_disk=0,min_ram=0,name='tempest-image-dependency-test-290862993',owner='0d692fe6fe5e446c86fe7152afbbaa17',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-11-24T18:49:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 18:49:43 compute-0 nova_compute[270693]: 2025-11-24 18:49:43.246 270697 DEBUG nova.virt.hardware [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 18:49:43 compute-0 nova_compute[270693]: 2025-11-24 18:49:43.247 270697 DEBUG nova.virt.hardware [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 18:49:43 compute-0 nova_compute[270693]: 2025-11-24 18:49:43.247 270697 DEBUG nova.virt.hardware [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 18:49:43 compute-0 nova_compute[270693]: 2025-11-24 18:49:43.247 270697 DEBUG nova.virt.hardware [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 18:49:43 compute-0 nova_compute[270693]: 2025-11-24 18:49:43.247 270697 DEBUG nova.virt.hardware [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 18:49:43 compute-0 nova_compute[270693]: 2025-11-24 18:49:43.248 270697 DEBUG nova.virt.hardware [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 18:49:43 compute-0 nova_compute[270693]: 2025-11-24 18:49:43.248 270697 DEBUG nova.virt.hardware [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 18:49:43 compute-0 nova_compute[270693]: 2025-11-24 18:49:43.248 270697 DEBUG nova.virt.hardware [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 18:49:43 compute-0 nova_compute[270693]: 2025-11-24 18:49:43.248 270697 DEBUG nova.virt.hardware [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 18:49:43 compute-0 nova_compute[270693]: 2025-11-24 18:49:43.249 270697 DEBUG nova.virt.hardware [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 24 18:49:43 compute-0 nova_compute[270693]: 2025-11-24 18:49:43.252 270697 DEBUG oslo_concurrency.processutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:49:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:49:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:49:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:49:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:49:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.480037605000977e-06 of space, bias 1.0, pg target 0.0007440112815002931 quantized to 32 (current 32)
Nov 24 18:49:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:49:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Nov 24 18:49:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:49:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:49:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:49:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006663034365435958 of space, bias 1.0, pg target 0.19989103096307873 quantized to 32 (current 32)
Nov 24 18:49:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:49:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:49:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:49:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:49:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:49:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:49:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:49:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:49:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:49:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:49:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:49:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:49:43 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1048: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 4.0 KiB/s wr, 73 op/s
Nov 24 18:49:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 24 18:49:43 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2024916754' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 18:49:43 compute-0 nova_compute[270693]: 2025-11-24 18:49:43.657 270697 DEBUG oslo_concurrency.processutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:49:43 compute-0 nova_compute[270693]: 2025-11-24 18:49:43.678 270697 DEBUG nova.storage.rbd_utils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] rbd image 3226af13-afcf-47ff-91b3-2ccec9def10d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 18:49:43 compute-0 nova_compute[270693]: 2025-11-24 18:49:43.682 270697 DEBUG oslo_concurrency.processutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:49:43 compute-0 ceph-mon[74927]: osdmap e153: 3 total, 3 up, 3 in
Nov 24 18:49:43 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2024916754' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 18:49:44 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 24 18:49:44 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1490985699' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 18:49:44 compute-0 nova_compute[270693]: 2025-11-24 18:49:44.110 270697 DEBUG oslo_concurrency.processutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:49:44 compute-0 nova_compute[270693]: 2025-11-24 18:49:44.112 270697 DEBUG nova.objects.instance [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3226af13-afcf-47ff-91b3-2ccec9def10d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 18:49:44 compute-0 nova_compute[270693]: 2025-11-24 18:49:44.134 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] End _get_guest_xml xml=<domain type="kvm">
Nov 24 18:49:44 compute-0 nova_compute[270693]:   <uuid>3226af13-afcf-47ff-91b3-2ccec9def10d</uuid>
Nov 24 18:49:44 compute-0 nova_compute[270693]:   <name>instance-00000002</name>
Nov 24 18:49:44 compute-0 nova_compute[270693]:   <memory>131072</memory>
Nov 24 18:49:44 compute-0 nova_compute[270693]:   <vcpu>1</vcpu>
Nov 24 18:49:44 compute-0 nova_compute[270693]:   <metadata>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 18:49:44 compute-0 nova_compute[270693]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:       <nova:name>instance-depend-image</nova:name>
Nov 24 18:49:44 compute-0 nova_compute[270693]:       <nova:creationTime>2025-11-24 18:49:43</nova:creationTime>
Nov 24 18:49:44 compute-0 nova_compute[270693]:       <nova:flavor name="m1.nano">
Nov 24 18:49:44 compute-0 nova_compute[270693]:         <nova:memory>128</nova:memory>
Nov 24 18:49:44 compute-0 nova_compute[270693]:         <nova:disk>1</nova:disk>
Nov 24 18:49:44 compute-0 nova_compute[270693]:         <nova:swap>0</nova:swap>
Nov 24 18:49:44 compute-0 nova_compute[270693]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 18:49:44 compute-0 nova_compute[270693]:         <nova:vcpus>1</nova:vcpus>
Nov 24 18:49:44 compute-0 nova_compute[270693]:       </nova:flavor>
Nov 24 18:49:44 compute-0 nova_compute[270693]:       <nova:owner>
Nov 24 18:49:44 compute-0 nova_compute[270693]:         <nova:user uuid="c5033dc71ef0458982cc0f8121662150">tempest-ImageDependencyTests-981399736-project-member</nova:user>
Nov 24 18:49:44 compute-0 nova_compute[270693]:         <nova:project uuid="0d692fe6fe5e446c86fe7152afbbaa17">tempest-ImageDependencyTests-981399736</nova:project>
Nov 24 18:49:44 compute-0 nova_compute[270693]:       </nova:owner>
Nov 24 18:49:44 compute-0 nova_compute[270693]:       <nova:root type="image" uuid="e08f0b9d-adb5-48f3-899f-503d3912e516"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:       <nova:ports/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     </nova:instance>
Nov 24 18:49:44 compute-0 nova_compute[270693]:   </metadata>
Nov 24 18:49:44 compute-0 nova_compute[270693]:   <sysinfo type="smbios">
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <system>
Nov 24 18:49:44 compute-0 nova_compute[270693]:       <entry name="manufacturer">RDO</entry>
Nov 24 18:49:44 compute-0 nova_compute[270693]:       <entry name="product">OpenStack Compute</entry>
Nov 24 18:49:44 compute-0 nova_compute[270693]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 18:49:44 compute-0 nova_compute[270693]:       <entry name="serial">3226af13-afcf-47ff-91b3-2ccec9def10d</entry>
Nov 24 18:49:44 compute-0 nova_compute[270693]:       <entry name="uuid">3226af13-afcf-47ff-91b3-2ccec9def10d</entry>
Nov 24 18:49:44 compute-0 nova_compute[270693]:       <entry name="family">Virtual Machine</entry>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     </system>
Nov 24 18:49:44 compute-0 nova_compute[270693]:   </sysinfo>
Nov 24 18:49:44 compute-0 nova_compute[270693]:   <os>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <boot dev="hd"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <smbios mode="sysinfo"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:   </os>
Nov 24 18:49:44 compute-0 nova_compute[270693]:   <features>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <acpi/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <apic/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <vmcoreinfo/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:   </features>
Nov 24 18:49:44 compute-0 nova_compute[270693]:   <clock offset="utc">
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <timer name="hpet" present="no"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:   </clock>
Nov 24 18:49:44 compute-0 nova_compute[270693]:   <cpu mode="host-model" match="exact">
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:   </cpu>
Nov 24 18:49:44 compute-0 nova_compute[270693]:   <devices>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <disk type="network" device="disk">
Nov 24 18:49:44 compute-0 nova_compute[270693]:       <driver type="raw" cache="none"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:       <source protocol="rbd" name="vms/3226af13-afcf-47ff-91b3-2ccec9def10d_disk">
Nov 24 18:49:44 compute-0 nova_compute[270693]:         <host name="192.168.122.100" port="6789"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:       </source>
Nov 24 18:49:44 compute-0 nova_compute[270693]:       <auth username="openstack">
Nov 24 18:49:44 compute-0 nova_compute[270693]:         <secret type="ceph" uuid="e5ee928f-099b-569b-93c9-ecf025cbb50d"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:       </auth>
Nov 24 18:49:44 compute-0 nova_compute[270693]:       <target dev="vda" bus="virtio"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     </disk>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <disk type="network" device="cdrom">
Nov 24 18:49:44 compute-0 nova_compute[270693]:       <driver type="raw" cache="none"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:       <source protocol="rbd" name="vms/3226af13-afcf-47ff-91b3-2ccec9def10d_disk.config">
Nov 24 18:49:44 compute-0 nova_compute[270693]:         <host name="192.168.122.100" port="6789"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:       </source>
Nov 24 18:49:44 compute-0 nova_compute[270693]:       <auth username="openstack">
Nov 24 18:49:44 compute-0 nova_compute[270693]:         <secret type="ceph" uuid="e5ee928f-099b-569b-93c9-ecf025cbb50d"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:       </auth>
Nov 24 18:49:44 compute-0 nova_compute[270693]:       <target dev="sda" bus="sata"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     </disk>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <serial type="pty">
Nov 24 18:49:44 compute-0 nova_compute[270693]:       <log file="/var/lib/nova/instances/3226af13-afcf-47ff-91b3-2ccec9def10d/console.log" append="off"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     </serial>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <video>
Nov 24 18:49:44 compute-0 nova_compute[270693]:       <model type="virtio"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     </video>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <input type="tablet" bus="usb"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <rng model="virtio">
Nov 24 18:49:44 compute-0 nova_compute[270693]:       <backend model="random">/dev/urandom</backend>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     </rng>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <controller type="usb" index="0"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     <memballoon model="virtio">
Nov 24 18:49:44 compute-0 nova_compute[270693]:       <stats period="10"/>
Nov 24 18:49:44 compute-0 nova_compute[270693]:     </memballoon>
Nov 24 18:49:44 compute-0 nova_compute[270693]:   </devices>
Nov 24 18:49:44 compute-0 nova_compute[270693]: </domain>
Nov 24 18:49:44 compute-0 nova_compute[270693]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 24 18:49:44 compute-0 nova_compute[270693]: 2025-11-24 18:49:44.191 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 18:49:44 compute-0 nova_compute[270693]: 2025-11-24 18:49:44.192 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 18:49:44 compute-0 nova_compute[270693]: 2025-11-24 18:49:44.192 270697 INFO nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Using config drive
Nov 24 18:49:44 compute-0 nova_compute[270693]: 2025-11-24 18:49:44.215 270697 DEBUG nova.storage.rbd_utils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] rbd image 3226af13-afcf-47ff-91b3-2ccec9def10d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 18:49:44 compute-0 nova_compute[270693]: 2025-11-24 18:49:44.379 270697 INFO nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Creating config drive at /var/lib/nova/instances/3226af13-afcf-47ff-91b3-2ccec9def10d/disk.config
Nov 24 18:49:44 compute-0 nova_compute[270693]: 2025-11-24 18:49:44.386 270697 DEBUG oslo_concurrency.processutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3226af13-afcf-47ff-91b3-2ccec9def10d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxm1dxv00 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:49:44 compute-0 nova_compute[270693]: 2025-11-24 18:49:44.510 270697 DEBUG oslo_concurrency.processutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3226af13-afcf-47ff-91b3-2ccec9def10d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxm1dxv00" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:49:44 compute-0 nova_compute[270693]: 2025-11-24 18:49:44.535 270697 DEBUG nova.storage.rbd_utils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] rbd image 3226af13-afcf-47ff-91b3-2ccec9def10d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 18:49:44 compute-0 nova_compute[270693]: 2025-11-24 18:49:44.538 270697 DEBUG oslo_concurrency.processutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3226af13-afcf-47ff-91b3-2ccec9def10d/disk.config 3226af13-afcf-47ff-91b3-2ccec9def10d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:49:44 compute-0 nova_compute[270693]: 2025-11-24 18:49:44.682 270697 DEBUG oslo_concurrency.processutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3226af13-afcf-47ff-91b3-2ccec9def10d/disk.config 3226af13-afcf-47ff-91b3-2ccec9def10d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:49:44 compute-0 nova_compute[270693]: 2025-11-24 18:49:44.683 270697 INFO nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Deleting local config drive /var/lib/nova/instances/3226af13-afcf-47ff-91b3-2ccec9def10d/disk.config because it was imported into RBD.
Nov 24 18:49:44 compute-0 systemd-machined[232503]: New machine qemu-2-instance-00000002.
Nov 24 18:49:44 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Nov 24 18:49:44 compute-0 ceph-mon[74927]: pgmap v1048: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 4.0 KiB/s wr, 73 op/s
Nov 24 18:49:44 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1490985699' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 18:49:44 compute-0 nova_compute[270693]: 2025-11-24 18:49:44.996 270697 DEBUG nova.virt.driver [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Emitting event <LifecycleEvent: 1764010184.9956279, 3226af13-afcf-47ff-91b3-2ccec9def10d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 18:49:44 compute-0 nova_compute[270693]: 2025-11-24 18:49:44.996 270697 INFO nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] VM Resumed (Lifecycle Event)
Nov 24 18:49:44 compute-0 nova_compute[270693]: 2025-11-24 18:49:44.998 270697 DEBUG nova.compute.manager [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 18:49:44 compute-0 nova_compute[270693]: 2025-11-24 18:49:44.998 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 18:49:45 compute-0 nova_compute[270693]: 2025-11-24 18:49:45.001 270697 INFO nova.virt.libvirt.driver [-] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Instance spawned successfully.
Nov 24 18:49:45 compute-0 nova_compute[270693]: 2025-11-24 18:49:45.001 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 18:49:45 compute-0 nova_compute[270693]: 2025-11-24 18:49:45.019 270697 DEBUG nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 18:49:45 compute-0 nova_compute[270693]: 2025-11-24 18:49:45.022 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 18:49:45 compute-0 nova_compute[270693]: 2025-11-24 18:49:45.022 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 18:49:45 compute-0 nova_compute[270693]: 2025-11-24 18:49:45.022 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 18:49:45 compute-0 nova_compute[270693]: 2025-11-24 18:49:45.023 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 18:49:45 compute-0 nova_compute[270693]: 2025-11-24 18:49:45.023 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 18:49:45 compute-0 nova_compute[270693]: 2025-11-24 18:49:45.023 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 18:49:45 compute-0 nova_compute[270693]: 2025-11-24 18:49:45.026 270697 DEBUG nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 18:49:45 compute-0 nova_compute[270693]: 2025-11-24 18:49:45.088 270697 INFO nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 18:49:45 compute-0 nova_compute[270693]: 2025-11-24 18:49:45.088 270697 DEBUG nova.virt.driver [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Emitting event <LifecycleEvent: 1764010184.9972782, 3226af13-afcf-47ff-91b3-2ccec9def10d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 18:49:45 compute-0 nova_compute[270693]: 2025-11-24 18:49:45.088 270697 INFO nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] VM Started (Lifecycle Event)
Nov 24 18:49:45 compute-0 nova_compute[270693]: 2025-11-24 18:49:45.123 270697 DEBUG nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 18:49:45 compute-0 nova_compute[270693]: 2025-11-24 18:49:45.124 270697 INFO nova.compute.manager [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Took 2.64 seconds to spawn the instance on the hypervisor.
Nov 24 18:49:45 compute-0 nova_compute[270693]: 2025-11-24 18:49:45.125 270697 DEBUG nova.compute.manager [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 18:49:45 compute-0 nova_compute[270693]: 2025-11-24 18:49:45.135 270697 DEBUG nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 18:49:45 compute-0 nova_compute[270693]: 2025-11-24 18:49:45.175 270697 INFO nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 18:49:45 compute-0 nova_compute[270693]: 2025-11-24 18:49:45.210 270697 INFO nova.compute.manager [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Took 3.61 seconds to build instance.
Nov 24 18:49:45 compute-0 nova_compute[270693]: 2025-11-24 18:49:45.237 270697 DEBUG oslo_concurrency.lockutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "3226af13-afcf-47ff-91b3-2ccec9def10d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 3.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:49:45 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1049: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 5.4 KiB/s wr, 119 op/s
Nov 24 18:49:46 compute-0 nova_compute[270693]: 2025-11-24 18:49:46.652 270697 DEBUG nova.compute.manager [None req-9a8507b7-ff88-4ba4-b22b-04c2d47afef2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 18:49:46 compute-0 nova_compute[270693]: 2025-11-24 18:49:46.704 270697 INFO nova.compute.manager [None req-9a8507b7-ff88-4ba4-b22b-04c2d47afef2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] instance snapshotting
Nov 24 18:49:46 compute-0 ceph-mon[74927]: pgmap v1049: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 5.4 KiB/s wr, 119 op/s
Nov 24 18:49:46 compute-0 nova_compute[270693]: 2025-11-24 18:49:46.962 270697 INFO nova.virt.libvirt.driver [None req-9a8507b7-ff88-4ba4-b22b-04c2d47afef2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Beginning live snapshot process
Nov 24 18:49:47 compute-0 nova_compute[270693]: 2025-11-24 18:49:47.111 270697 DEBUG nova.storage.rbd_utils [None req-9a8507b7-ff88-4ba4-b22b-04c2d47afef2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] creating snapshot(8d9fc30f3e21495986539262bbd5d8d3) on rbd image(3226af13-afcf-47ff-91b3-2ccec9def10d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 24 18:49:47 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1050: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 4.3 KiB/s wr, 95 op/s
Nov 24 18:49:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:49:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Nov 24 18:49:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Nov 24 18:49:47 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Nov 24 18:49:47 compute-0 nova_compute[270693]: 2025-11-24 18:49:47.996 270697 DEBUG nova.storage.rbd_utils [None req-9a8507b7-ff88-4ba4-b22b-04c2d47afef2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] cloning vms/3226af13-afcf-47ff-91b3-2ccec9def10d_disk@8d9fc30f3e21495986539262bbd5d8d3 to images/d89c4af4-ef58-418d-a436-2c65c07ddebe clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 24 18:49:48 compute-0 nova_compute[270693]: 2025-11-24 18:49:48.114 270697 DEBUG nova.storage.rbd_utils [None req-9a8507b7-ff88-4ba4-b22b-04c2d47afef2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] flattening images/d89c4af4-ef58-418d-a436-2c65c07ddebe flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 24 18:49:48 compute-0 nova_compute[270693]: 2025-11-24 18:49:48.274 270697 DEBUG nova.storage.rbd_utils [None req-9a8507b7-ff88-4ba4-b22b-04c2d47afef2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] removing snapshot(8d9fc30f3e21495986539262bbd5d8d3) on rbd image(3226af13-afcf-47ff-91b3-2ccec9def10d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 24 18:49:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Nov 24 18:49:48 compute-0 ceph-mon[74927]: pgmap v1050: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 4.3 KiB/s wr, 95 op/s
Nov 24 18:49:48 compute-0 ceph-mon[74927]: osdmap e154: 3 total, 3 up, 3 in
Nov 24 18:49:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Nov 24 18:49:48 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Nov 24 18:49:49 compute-0 nova_compute[270693]: 2025-11-24 18:49:49.026 270697 DEBUG nova.storage.rbd_utils [None req-9a8507b7-ff88-4ba4-b22b-04c2d47afef2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] creating snapshot(snap) on rbd image(d89c4af4-ef58-418d-a436-2c65c07ddebe) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 24 18:49:49 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1053: 321 pgs: 321 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 25 KiB/s wr, 85 op/s
Nov 24 18:49:49 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Nov 24 18:49:49 compute-0 ceph-mon[74927]: osdmap e155: 3 total, 3 up, 3 in
Nov 24 18:49:49 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Nov 24 18:49:49 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Nov 24 18:49:50 compute-0 ceph-mon[74927]: pgmap v1053: 321 pgs: 321 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 25 KiB/s wr, 85 op/s
Nov 24 18:49:50 compute-0 ceph-mon[74927]: osdmap e156: 3 total, 3 up, 3 in
Nov 24 18:49:51 compute-0 nova_compute[270693]: 2025-11-24 18:49:51.405 270697 INFO nova.virt.libvirt.driver [None req-9a8507b7-ff88-4ba4-b22b-04c2d47afef2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Snapshot image upload complete
Nov 24 18:49:51 compute-0 nova_compute[270693]: 2025-11-24 18:49:51.406 270697 INFO nova.compute.manager [None req-9a8507b7-ff88-4ba4-b22b-04c2d47afef2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Took 4.70 seconds to snapshot the instance on the hypervisor.
Nov 24 18:49:51 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1055: 321 pgs: 321 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 29 KiB/s wr, 128 op/s
Nov 24 18:49:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:49:52 compute-0 ceph-mon[74927]: pgmap v1055: 321 pgs: 321 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 29 KiB/s wr, 128 op/s
Nov 24 18:49:53 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1056: 321 pgs: 321 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 29 KiB/s wr, 135 op/s
Nov 24 18:49:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Nov 24 18:49:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Nov 24 18:49:54 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Nov 24 18:49:54 compute-0 nova_compute[270693]: 2025-11-24 18:49:54.908 270697 DEBUG oslo_concurrency.lockutils [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "3226af13-afcf-47ff-91b3-2ccec9def10d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:49:54 compute-0 nova_compute[270693]: 2025-11-24 18:49:54.908 270697 DEBUG oslo_concurrency.lockutils [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "3226af13-afcf-47ff-91b3-2ccec9def10d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:49:54 compute-0 nova_compute[270693]: 2025-11-24 18:49:54.909 270697 DEBUG oslo_concurrency.lockutils [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "3226af13-afcf-47ff-91b3-2ccec9def10d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:49:54 compute-0 nova_compute[270693]: 2025-11-24 18:49:54.909 270697 DEBUG oslo_concurrency.lockutils [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "3226af13-afcf-47ff-91b3-2ccec9def10d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:49:54 compute-0 nova_compute[270693]: 2025-11-24 18:49:54.909 270697 DEBUG oslo_concurrency.lockutils [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "3226af13-afcf-47ff-91b3-2ccec9def10d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:49:54 compute-0 nova_compute[270693]: 2025-11-24 18:49:54.910 270697 INFO nova.compute.manager [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Terminating instance
Nov 24 18:49:54 compute-0 nova_compute[270693]: 2025-11-24 18:49:54.911 270697 DEBUG oslo_concurrency.lockutils [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "refresh_cache-3226af13-afcf-47ff-91b3-2ccec9def10d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 18:49:54 compute-0 nova_compute[270693]: 2025-11-24 18:49:54.911 270697 DEBUG oslo_concurrency.lockutils [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquired lock "refresh_cache-3226af13-afcf-47ff-91b3-2ccec9def10d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 18:49:54 compute-0 nova_compute[270693]: 2025-11-24 18:49:54.911 270697 DEBUG nova.network.neutron [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 18:49:55 compute-0 ceph-mon[74927]: pgmap v1056: 321 pgs: 321 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 29 KiB/s wr, 135 op/s
Nov 24 18:49:55 compute-0 ceph-mon[74927]: osdmap e157: 3 total, 3 up, 3 in
Nov 24 18:49:55 compute-0 nova_compute[270693]: 2025-11-24 18:49:55.350 270697 DEBUG nova.network.neutron [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 18:49:55 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1058: 321 pgs: 321 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 146 KiB/s rd, 7.0 KiB/s wr, 188 op/s
Nov 24 18:49:55 compute-0 nova_compute[270693]: 2025-11-24 18:49:55.650 270697 DEBUG nova.network.neutron [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 18:49:55 compute-0 nova_compute[270693]: 2025-11-24 18:49:55.668 270697 DEBUG oslo_concurrency.lockutils [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Releasing lock "refresh_cache-3226af13-afcf-47ff-91b3-2ccec9def10d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 18:49:55 compute-0 nova_compute[270693]: 2025-11-24 18:49:55.669 270697 DEBUG nova.compute.manager [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 18:49:55 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Nov 24 18:49:55 compute-0 systemd-machined[232503]: Machine qemu-2-instance-00000002 terminated.
Nov 24 18:49:55 compute-0 nova_compute[270693]: 2025-11-24 18:49:55.888 270697 INFO nova.virt.libvirt.driver [-] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Instance destroyed successfully.
Nov 24 18:49:55 compute-0 nova_compute[270693]: 2025-11-24 18:49:55.888 270697 DEBUG nova.objects.instance [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lazy-loading 'resources' on Instance uuid 3226af13-afcf-47ff-91b3-2ccec9def10d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 18:49:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Nov 24 18:49:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Nov 24 18:49:57 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Nov 24 18:49:57 compute-0 ceph-mon[74927]: pgmap v1058: 321 pgs: 321 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 146 KiB/s rd, 7.0 KiB/s wr, 188 op/s
Nov 24 18:49:57 compute-0 nova_compute[270693]: 2025-11-24 18:49:57.302 270697 INFO nova.virt.libvirt.driver [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Deleting instance files /var/lib/nova/instances/3226af13-afcf-47ff-91b3-2ccec9def10d_del
Nov 24 18:49:57 compute-0 nova_compute[270693]: 2025-11-24 18:49:57.302 270697 INFO nova.virt.libvirt.driver [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Deletion of /var/lib/nova/instances/3226af13-afcf-47ff-91b3-2ccec9def10d_del complete
Nov 24 18:49:57 compute-0 nova_compute[270693]: 2025-11-24 18:49:57.426 270697 DEBUG nova.virt.libvirt.host [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Nov 24 18:49:57 compute-0 nova_compute[270693]: 2025-11-24 18:49:57.427 270697 INFO nova.virt.libvirt.host [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] UEFI support detected
Nov 24 18:49:57 compute-0 nova_compute[270693]: 2025-11-24 18:49:57.429 270697 INFO nova.compute.manager [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Took 1.76 seconds to destroy the instance on the hypervisor.
Nov 24 18:49:57 compute-0 nova_compute[270693]: 2025-11-24 18:49:57.430 270697 DEBUG oslo.service.loopingcall [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 18:49:57 compute-0 nova_compute[270693]: 2025-11-24 18:49:57.430 270697 DEBUG nova.compute.manager [-] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 18:49:57 compute-0 nova_compute[270693]: 2025-11-24 18:49:57.430 270697 DEBUG nova.network.neutron [-] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 18:49:57 compute-0 nova_compute[270693]: 2025-11-24 18:49:57.556 270697 DEBUG nova.network.neutron [-] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 18:49:57 compute-0 nova_compute[270693]: 2025-11-24 18:49:57.580 270697 DEBUG nova.network.neutron [-] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 18:49:57 compute-0 nova_compute[270693]: 2025-11-24 18:49:57.600 270697 INFO nova.compute.manager [-] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Took 0.17 seconds to deallocate network for instance.
Nov 24 18:49:57 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1060: 321 pgs: 321 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.3 KiB/s wr, 77 op/s
Nov 24 18:49:57 compute-0 nova_compute[270693]: 2025-11-24 18:49:57.664 270697 DEBUG oslo_concurrency.lockutils [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:49:57 compute-0 nova_compute[270693]: 2025-11-24 18:49:57.665 270697 DEBUG oslo_concurrency.lockutils [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:49:57 compute-0 nova_compute[270693]: 2025-11-24 18:49:57.754 270697 DEBUG oslo_concurrency.processutils [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:49:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:49:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Nov 24 18:49:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Nov 24 18:49:57 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Nov 24 18:49:58 compute-0 ceph-mon[74927]: osdmap e158: 3 total, 3 up, 3 in
Nov 24 18:49:58 compute-0 ceph-mon[74927]: osdmap e159: 3 total, 3 up, 3 in
Nov 24 18:49:58 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:49:58 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1665544519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:49:58 compute-0 nova_compute[270693]: 2025-11-24 18:49:58.208 270697 DEBUG oslo_concurrency.processutils [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:49:58 compute-0 nova_compute[270693]: 2025-11-24 18:49:58.213 270697 DEBUG nova.compute.provider_tree [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 18:49:58 compute-0 nova_compute[270693]: 2025-11-24 18:49:58.228 270697 DEBUG nova.scheduler.client.report [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 18:49:58 compute-0 nova_compute[270693]: 2025-11-24 18:49:58.252 270697 DEBUG oslo_concurrency.lockutils [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:49:58 compute-0 nova_compute[270693]: 2025-11-24 18:49:58.277 270697 INFO nova.scheduler.client.report [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Deleted allocations for instance 3226af13-afcf-47ff-91b3-2ccec9def10d
Nov 24 18:49:58 compute-0 nova_compute[270693]: 2025-11-24 18:49:58.347 270697 DEBUG oslo_concurrency.lockutils [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "3226af13-afcf-47ff-91b3-2ccec9def10d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:49:59 compute-0 ceph-mon[74927]: pgmap v1060: 321 pgs: 321 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.3 KiB/s wr, 77 op/s
Nov 24 18:49:59 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1665544519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:49:59 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1062: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 4.8 KiB/s wr, 132 op/s
Nov 24 18:49:59 compute-0 nova_compute[270693]: 2025-11-24 18:49:59.767 270697 DEBUG oslo_concurrency.lockutils [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "81f5edb9-2756-4a6e-bc3a-fa770161d562" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:49:59 compute-0 nova_compute[270693]: 2025-11-24 18:49:59.767 270697 DEBUG oslo_concurrency.lockutils [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "81f5edb9-2756-4a6e-bc3a-fa770161d562" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:49:59 compute-0 nova_compute[270693]: 2025-11-24 18:49:59.768 270697 DEBUG oslo_concurrency.lockutils [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "81f5edb9-2756-4a6e-bc3a-fa770161d562-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:49:59 compute-0 nova_compute[270693]: 2025-11-24 18:49:59.768 270697 DEBUG oslo_concurrency.lockutils [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "81f5edb9-2756-4a6e-bc3a-fa770161d562-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:49:59 compute-0 nova_compute[270693]: 2025-11-24 18:49:59.768 270697 DEBUG oslo_concurrency.lockutils [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "81f5edb9-2756-4a6e-bc3a-fa770161d562-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:49:59 compute-0 nova_compute[270693]: 2025-11-24 18:49:59.769 270697 INFO nova.compute.manager [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Terminating instance
Nov 24 18:49:59 compute-0 nova_compute[270693]: 2025-11-24 18:49:59.770 270697 DEBUG oslo_concurrency.lockutils [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "refresh_cache-81f5edb9-2756-4a6e-bc3a-fa770161d562" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 18:49:59 compute-0 nova_compute[270693]: 2025-11-24 18:49:59.770 270697 DEBUG oslo_concurrency.lockutils [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquired lock "refresh_cache-81f5edb9-2756-4a6e-bc3a-fa770161d562" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 18:49:59 compute-0 nova_compute[270693]: 2025-11-24 18:49:59.771 270697 DEBUG nova.network.neutron [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 18:50:00 compute-0 nova_compute[270693]: 2025-11-24 18:50:00.654 270697 DEBUG nova.network.neutron [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 18:50:00 compute-0 nova_compute[270693]: 2025-11-24 18:50:00.904 270697 DEBUG nova.network.neutron [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 18:50:00 compute-0 nova_compute[270693]: 2025-11-24 18:50:00.921 270697 DEBUG oslo_concurrency.lockutils [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Releasing lock "refresh_cache-81f5edb9-2756-4a6e-bc3a-fa770161d562" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 18:50:00 compute-0 nova_compute[270693]: 2025-11-24 18:50:00.921 270697 DEBUG nova.compute.manager [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 18:50:00 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Nov 24 18:50:00 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 1.155s CPU time.
Nov 24 18:50:00 compute-0 systemd-machined[232503]: Machine qemu-1-instance-00000001 terminated.
Nov 24 18:50:01 compute-0 ceph-mon[74927]: pgmap v1062: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 4.8 KiB/s wr, 132 op/s
Nov 24 18:50:01 compute-0 nova_compute[270693]: 2025-11-24 18:50:01.145 270697 INFO nova.virt.libvirt.driver [-] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Instance destroyed successfully.
Nov 24 18:50:01 compute-0 nova_compute[270693]: 2025-11-24 18:50:01.145 270697 DEBUG nova.objects.instance [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lazy-loading 'resources' on Instance uuid 81f5edb9-2756-4a6e-bc3a-fa770161d562 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 18:50:01 compute-0 nova_compute[270693]: 2025-11-24 18:50:01.341 270697 INFO nova.virt.libvirt.driver [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Deleting instance files /var/lib/nova/instances/81f5edb9-2756-4a6e-bc3a-fa770161d562_del
Nov 24 18:50:01 compute-0 nova_compute[270693]: 2025-11-24 18:50:01.342 270697 INFO nova.virt.libvirt.driver [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Deletion of /var/lib/nova/instances/81f5edb9-2756-4a6e-bc3a-fa770161d562_del complete
Nov 24 18:50:01 compute-0 nova_compute[270693]: 2025-11-24 18:50:01.513 270697 INFO nova.compute.manager [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Took 0.59 seconds to destroy the instance on the hypervisor.
Nov 24 18:50:01 compute-0 nova_compute[270693]: 2025-11-24 18:50:01.514 270697 DEBUG oslo.service.loopingcall [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 18:50:01 compute-0 nova_compute[270693]: 2025-11-24 18:50:01.514 270697 DEBUG nova.compute.manager [-] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 18:50:01 compute-0 nova_compute[270693]: 2025-11-24 18:50:01.515 270697 DEBUG nova.network.neutron [-] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 18:50:01 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1063: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 5.0 KiB/s wr, 126 op/s
Nov 24 18:50:01 compute-0 nova_compute[270693]: 2025-11-24 18:50:01.745 270697 DEBUG nova.network.neutron [-] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 18:50:01 compute-0 nova_compute[270693]: 2025-11-24 18:50:01.755 270697 DEBUG nova.network.neutron [-] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 18:50:01 compute-0 nova_compute[270693]: 2025-11-24 18:50:01.766 270697 INFO nova.compute.manager [-] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Took 0.25 seconds to deallocate network for instance.
Nov 24 18:50:02 compute-0 nova_compute[270693]: 2025-11-24 18:50:02.063 270697 INFO nova.compute.manager [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Took 0.30 seconds to detach 1 volumes for instance.
Nov 24 18:50:02 compute-0 nova_compute[270693]: 2025-11-24 18:50:02.065 270697 DEBUG nova.compute.manager [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Deleting volume: 57bd14c1-40c4-42ca-854f-95f89e621d53 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Nov 24 18:50:02 compute-0 nova_compute[270693]: 2025-11-24 18:50:02.366 270697 DEBUG oslo_concurrency.lockutils [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:50:02 compute-0 nova_compute[270693]: 2025-11-24 18:50:02.367 270697 DEBUG oslo_concurrency.lockutils [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:50:02 compute-0 nova_compute[270693]: 2025-11-24 18:50:02.409 270697 DEBUG oslo_concurrency.processutils [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:50:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:50:02 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1715933528' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:50:02 compute-0 nova_compute[270693]: 2025-11-24 18:50:02.847 270697 DEBUG oslo_concurrency.processutils [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:50:02 compute-0 nova_compute[270693]: 2025-11-24 18:50:02.854 270697 DEBUG nova.compute.provider_tree [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 18:50:02 compute-0 nova_compute[270693]: 2025-11-24 18:50:02.883 270697 DEBUG nova.scheduler.client.report [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 18:50:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:50:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Nov 24 18:50:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Nov 24 18:50:02 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Nov 24 18:50:02 compute-0 nova_compute[270693]: 2025-11-24 18:50:02.919 270697 DEBUG oslo_concurrency.lockutils [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:50:02 compute-0 nova_compute[270693]: 2025-11-24 18:50:02.962 270697 INFO nova.scheduler.client.report [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Deleted allocations for instance 81f5edb9-2756-4a6e-bc3a-fa770161d562
Nov 24 18:50:03 compute-0 nova_compute[270693]: 2025-11-24 18:50:03.053 270697 DEBUG oslo_concurrency.lockutils [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "81f5edb9-2756-4a6e-bc3a-fa770161d562" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.285s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:50:03 compute-0 ceph-mon[74927]: pgmap v1063: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 5.0 KiB/s wr, 126 op/s
Nov 24 18:50:03 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1715933528' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:50:03 compute-0 ceph-mon[74927]: osdmap e160: 3 total, 3 up, 3 in
Nov 24 18:50:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 18:50:03 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2977381307' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:50:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 18:50:03 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2977381307' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:50:03 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1065: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 3.8 KiB/s wr, 95 op/s
Nov 24 18:50:04 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/2977381307' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:50:04 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/2977381307' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:50:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:50:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:50:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:50:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:50:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:50:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:50:05 compute-0 ceph-mon[74927]: pgmap v1065: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 3.8 KiB/s wr, 95 op/s
Nov 24 18:50:05 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:50:05.500 179763 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:2b:64', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'fa:26:5b:32:fa:ba'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 18:50:05 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:50:05.500 179763 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 18:50:05 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1066: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 4.4 KiB/s wr, 104 op/s
Nov 24 18:50:06 compute-0 podman[278309]: 2025-11-24 18:50:06.009107507 +0000 UTC m=+0.103150258 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 18:50:06 compute-0 sudo[278337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:50:06 compute-0 sudo[278337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:50:06 compute-0 sudo[278337]: pam_unix(sudo:session): session closed for user root
Nov 24 18:50:06 compute-0 sudo[278362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:50:06 compute-0 sudo[278362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:50:06 compute-0 sudo[278362]: pam_unix(sudo:session): session closed for user root
Nov 24 18:50:06 compute-0 sudo[278387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:50:06 compute-0 sudo[278387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:50:06 compute-0 sudo[278387]: pam_unix(sudo:session): session closed for user root
Nov 24 18:50:06 compute-0 sudo[278412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 24 18:50:06 compute-0 sudo[278412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:50:06 compute-0 sudo[278412]: pam_unix(sudo:session): session closed for user root
Nov 24 18:50:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:50:06 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:50:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:50:06 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:50:06 compute-0 sudo[278458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:50:06 compute-0 sudo[278458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:50:06 compute-0 sudo[278458]: pam_unix(sudo:session): session closed for user root
Nov 24 18:50:06 compute-0 sudo[278483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:50:06 compute-0 sudo[278483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:50:06 compute-0 sudo[278483]: pam_unix(sudo:session): session closed for user root
Nov 24 18:50:06 compute-0 sudo[278508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:50:06 compute-0 sudo[278508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:50:06 compute-0 sudo[278508]: pam_unix(sudo:session): session closed for user root
Nov 24 18:50:06 compute-0 sudo[278533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:50:06 compute-0 sudo[278533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:50:07 compute-0 ceph-mon[74927]: pgmap v1066: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 4.4 KiB/s wr, 104 op/s
Nov 24 18:50:07 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:50:07 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:50:07 compute-0 sudo[278533]: pam_unix(sudo:session): session closed for user root
Nov 24 18:50:07 compute-0 sudo[278588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:50:07 compute-0 sudo[278588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:50:07 compute-0 sudo[278588]: pam_unix(sudo:session): session closed for user root
Nov 24 18:50:07 compute-0 sudo[278625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:50:07 compute-0 sudo[278625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:50:07 compute-0 podman[278612]: 2025-11-24 18:50:07.620796809 +0000 UTC m=+0.092722313 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:50:07 compute-0 sudo[278625]: pam_unix(sudo:session): session closed for user root
Nov 24 18:50:07 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1067: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 3.6 KiB/s wr, 85 op/s
Nov 24 18:50:07 compute-0 podman[278613]: 2025-11-24 18:50:07.657307583 +0000 UTC m=+0.127215997 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 18:50:07 compute-0 sudo[278675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:50:07 compute-0 sudo[278675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:50:07 compute-0 sudo[278675]: pam_unix(sudo:session): session closed for user root
Nov 24 18:50:07 compute-0 sudo[278700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 24 18:50:07 compute-0 sudo[278700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:50:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:50:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Nov 24 18:50:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Nov 24 18:50:07 compute-0 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Nov 24 18:50:07 compute-0 sudo[278700]: pam_unix(sudo:session): session closed for user root
Nov 24 18:50:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:50:07 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:50:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:50:08 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:50:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:50:08 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:50:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:50:08 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:50:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:50:08 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:50:08 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev f9ded4e4-0d95-4074-a0b2-e33220d9f186 does not exist
Nov 24 18:50:08 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 3a440a23-1cb6-4080-bba9-84f7731fb822 does not exist
Nov 24 18:50:08 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev c446aba9-f59d-4157-95d9-64378c2f576f does not exist
Nov 24 18:50:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:50:08 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:50:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:50:08 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:50:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:50:08 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:50:08 compute-0 sudo[278741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:50:08 compute-0 sudo[278741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:50:08 compute-0 sudo[278741]: pam_unix(sudo:session): session closed for user root
Nov 24 18:50:08 compute-0 sudo[278766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:50:08 compute-0 sudo[278766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:50:08 compute-0 sudo[278766]: pam_unix(sudo:session): session closed for user root
Nov 24 18:50:08 compute-0 sudo[278791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:50:08 compute-0 sudo[278791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:50:08 compute-0 sudo[278791]: pam_unix(sudo:session): session closed for user root
Nov 24 18:50:08 compute-0 sudo[278816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:50:08 compute-0 sudo[278816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:50:08 compute-0 podman[278881]: 2025-11-24 18:50:08.709715425 +0000 UTC m=+0.042555724 container create 05a8cf76dc3218ed22a87e0041012e43a220fb8dc764eee6c09a03651301fdd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_robinson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:50:08 compute-0 systemd[1]: Started libpod-conmon-05a8cf76dc3218ed22a87e0041012e43a220fb8dc764eee6c09a03651301fdd6.scope.
Nov 24 18:50:08 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:50:08 compute-0 podman[278881]: 2025-11-24 18:50:08.690656648 +0000 UTC m=+0.023496977 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:50:08 compute-0 podman[278881]: 2025-11-24 18:50:08.792151354 +0000 UTC m=+0.124991683 container init 05a8cf76dc3218ed22a87e0041012e43a220fb8dc764eee6c09a03651301fdd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_robinson, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:50:08 compute-0 podman[278881]: 2025-11-24 18:50:08.799178246 +0000 UTC m=+0.132018545 container start 05a8cf76dc3218ed22a87e0041012e43a220fb8dc764eee6c09a03651301fdd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_robinson, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:50:08 compute-0 podman[278881]: 2025-11-24 18:50:08.803206645 +0000 UTC m=+0.136046974 container attach 05a8cf76dc3218ed22a87e0041012e43a220fb8dc764eee6c09a03651301fdd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:50:08 compute-0 loving_robinson[278895]: 167 167
Nov 24 18:50:08 compute-0 systemd[1]: libpod-05a8cf76dc3218ed22a87e0041012e43a220fb8dc764eee6c09a03651301fdd6.scope: Deactivated successfully.
Nov 24 18:50:08 compute-0 conmon[278895]: conmon 05a8cf76dc3218ed22a8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-05a8cf76dc3218ed22a87e0041012e43a220fb8dc764eee6c09a03651301fdd6.scope/container/memory.events
Nov 24 18:50:08 compute-0 podman[278900]: 2025-11-24 18:50:08.853326943 +0000 UTC m=+0.030865387 container died 05a8cf76dc3218ed22a87e0041012e43a220fb8dc764eee6c09a03651301fdd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_robinson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 18:50:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-11600026154f137cf41b7177070d7db354e48b8f45530556201467b96bb59a7c-merged.mount: Deactivated successfully.
Nov 24 18:50:08 compute-0 podman[278900]: 2025-11-24 18:50:08.908046413 +0000 UTC m=+0.085584847 container remove 05a8cf76dc3218ed22a87e0041012e43a220fb8dc764eee6c09a03651301fdd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 18:50:08 compute-0 ceph-mon[74927]: pgmap v1067: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 3.6 KiB/s wr, 85 op/s
Nov 24 18:50:08 compute-0 ceph-mon[74927]: osdmap e161: 3 total, 3 up, 3 in
Nov 24 18:50:08 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:50:08 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:50:08 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:50:08 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:50:08 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:50:08 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:50:08 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:50:08 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:50:08 compute-0 systemd[1]: libpod-conmon-05a8cf76dc3218ed22a87e0041012e43a220fb8dc764eee6c09a03651301fdd6.scope: Deactivated successfully.
Nov 24 18:50:09 compute-0 podman[278922]: 2025-11-24 18:50:09.164155107 +0000 UTC m=+0.061485957 container create 2fcad48c8672019af4b97a680a6d4444605581c38ac6cd82bee9bc433d377045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:50:09 compute-0 systemd[1]: Started libpod-conmon-2fcad48c8672019af4b97a680a6d4444605581c38ac6cd82bee9bc433d377045.scope.
Nov 24 18:50:09 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:50:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2941b40273fb6a6c22c67cbf161a06b792aaddef7384e514e04cb115931bfdca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:50:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2941b40273fb6a6c22c67cbf161a06b792aaddef7384e514e04cb115931bfdca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:50:09 compute-0 podman[278922]: 2025-11-24 18:50:09.144047155 +0000 UTC m=+0.041378015 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:50:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2941b40273fb6a6c22c67cbf161a06b792aaddef7384e514e04cb115931bfdca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:50:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2941b40273fb6a6c22c67cbf161a06b792aaddef7384e514e04cb115931bfdca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:50:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2941b40273fb6a6c22c67cbf161a06b792aaddef7384e514e04cb115931bfdca/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:50:09 compute-0 podman[278922]: 2025-11-24 18:50:09.257678758 +0000 UTC m=+0.155009588 container init 2fcad48c8672019af4b97a680a6d4444605581c38ac6cd82bee9bc433d377045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_neumann, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:50:09 compute-0 podman[278922]: 2025-11-24 18:50:09.266213628 +0000 UTC m=+0.163544428 container start 2fcad48c8672019af4b97a680a6d4444605581c38ac6cd82bee9bc433d377045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:50:09 compute-0 podman[278922]: 2025-11-24 18:50:09.270040151 +0000 UTC m=+0.167370991 container attach 2fcad48c8672019af4b97a680a6d4444605581c38ac6cd82bee9bc433d377045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_neumann, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:50:09 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1069: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.7 KiB/s wr, 52 op/s
Nov 24 18:50:10 compute-0 hopeful_neumann[278938]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:50:10 compute-0 hopeful_neumann[278938]: --> relative data size: 1.0
Nov 24 18:50:10 compute-0 hopeful_neumann[278938]: --> All data devices are unavailable
Nov 24 18:50:10 compute-0 systemd[1]: libpod-2fcad48c8672019af4b97a680a6d4444605581c38ac6cd82bee9bc433d377045.scope: Deactivated successfully.
Nov 24 18:50:10 compute-0 podman[278922]: 2025-11-24 18:50:10.331994756 +0000 UTC m=+1.229325566 container died 2fcad48c8672019af4b97a680a6d4444605581c38ac6cd82bee9bc433d377045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_neumann, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:50:10 compute-0 systemd[1]: libpod-2fcad48c8672019af4b97a680a6d4444605581c38ac6cd82bee9bc433d377045.scope: Consumed 1.012s CPU time.
Nov 24 18:50:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-2941b40273fb6a6c22c67cbf161a06b792aaddef7384e514e04cb115931bfdca-merged.mount: Deactivated successfully.
Nov 24 18:50:10 compute-0 podman[278922]: 2025-11-24 18:50:10.380303389 +0000 UTC m=+1.277634199 container remove 2fcad48c8672019af4b97a680a6d4444605581c38ac6cd82bee9bc433d377045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_neumann, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:50:10 compute-0 systemd[1]: libpod-conmon-2fcad48c8672019af4b97a680a6d4444605581c38ac6cd82bee9bc433d377045.scope: Deactivated successfully.
Nov 24 18:50:10 compute-0 sudo[278816]: pam_unix(sudo:session): session closed for user root
Nov 24 18:50:10 compute-0 sudo[278977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:50:10 compute-0 sudo[278977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:50:10 compute-0 sudo[278977]: pam_unix(sudo:session): session closed for user root
Nov 24 18:50:10 compute-0 sudo[279002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:50:10 compute-0 sudo[279002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:50:10 compute-0 sudo[279002]: pam_unix(sudo:session): session closed for user root
Nov 24 18:50:10 compute-0 sudo[279027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:50:10 compute-0 sudo[279027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:50:10 compute-0 sudo[279027]: pam_unix(sudo:session): session closed for user root
Nov 24 18:50:10 compute-0 sudo[279052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:50:10 compute-0 sudo[279052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:50:10 compute-0 nova_compute[270693]: 2025-11-24 18:50:10.886 270697 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764010195.8853695, 3226af13-afcf-47ff-91b3-2ccec9def10d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 18:50:10 compute-0 nova_compute[270693]: 2025-11-24 18:50:10.888 270697 INFO nova.compute.manager [-] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] VM Stopped (Lifecycle Event)
Nov 24 18:50:10 compute-0 nova_compute[270693]: 2025-11-24 18:50:10.917 270697 DEBUG nova.compute.manager [None req-81330878-ede8-46cb-a9d0-6ae08f3ac908 - - - - - -] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 18:50:10 compute-0 ceph-mon[74927]: pgmap v1069: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.7 KiB/s wr, 52 op/s
Nov 24 18:50:10 compute-0 podman[279118]: 2025-11-24 18:50:10.968388406 +0000 UTC m=+0.054260731 container create 27cac7ac9827a0e0ad12ba6676aa769d4ff83f2f780c38c7b70d146967562e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 18:50:11 compute-0 systemd[1]: Started libpod-conmon-27cac7ac9827a0e0ad12ba6676aa769d4ff83f2f780c38c7b70d146967562e30.scope.
Nov 24 18:50:11 compute-0 podman[279118]: 2025-11-24 18:50:10.943139217 +0000 UTC m=+0.029011602 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:50:11 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:50:11 compute-0 podman[279118]: 2025-11-24 18:50:11.063431694 +0000 UTC m=+0.149304059 container init 27cac7ac9827a0e0ad12ba6676aa769d4ff83f2f780c38c7b70d146967562e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 18:50:11 compute-0 podman[279118]: 2025-11-24 18:50:11.074933046 +0000 UTC m=+0.160805371 container start 27cac7ac9827a0e0ad12ba6676aa769d4ff83f2f780c38c7b70d146967562e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 18:50:11 compute-0 gracious_chaplygin[279135]: 167 167
Nov 24 18:50:11 compute-0 systemd[1]: libpod-27cac7ac9827a0e0ad12ba6676aa769d4ff83f2f780c38c7b70d146967562e30.scope: Deactivated successfully.
Nov 24 18:50:11 compute-0 podman[279118]: 2025-11-24 18:50:11.08204936 +0000 UTC m=+0.167921745 container attach 27cac7ac9827a0e0ad12ba6676aa769d4ff83f2f780c38c7b70d146967562e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:50:11 compute-0 podman[279118]: 2025-11-24 18:50:11.083361242 +0000 UTC m=+0.169233597 container died 27cac7ac9827a0e0ad12ba6676aa769d4ff83f2f780c38c7b70d146967562e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 18:50:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1bbd07b927dc7c5b69878d508cccb5f5ce5b59b4278b947f24b66b975fde0b1-merged.mount: Deactivated successfully.
Nov 24 18:50:11 compute-0 podman[279118]: 2025-11-24 18:50:11.132468185 +0000 UTC m=+0.218340500 container remove 27cac7ac9827a0e0ad12ba6676aa769d4ff83f2f780c38c7b70d146967562e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:50:11 compute-0 systemd[1]: libpod-conmon-27cac7ac9827a0e0ad12ba6676aa769d4ff83f2f780c38c7b70d146967562e30.scope: Deactivated successfully.
Nov 24 18:50:11 compute-0 podman[279159]: 2025-11-24 18:50:11.323578007 +0000 UTC m=+0.050840196 container create 817fd049a6ceecfe37cab921184d247277a5b9c1e86f941f3572bbcfb6bf4921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:50:11 compute-0 systemd[1]: Started libpod-conmon-817fd049a6ceecfe37cab921184d247277a5b9c1e86f941f3572bbcfb6bf4921.scope.
Nov 24 18:50:11 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:50:11 compute-0 podman[279159]: 2025-11-24 18:50:11.29756449 +0000 UTC m=+0.024826749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:50:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50e5f81d7ea47eb4cf85ebe4604f5e2657b2d52b4854963f4c9ccc8392c3a89b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:50:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50e5f81d7ea47eb4cf85ebe4604f5e2657b2d52b4854963f4c9ccc8392c3a89b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:50:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50e5f81d7ea47eb4cf85ebe4604f5e2657b2d52b4854963f4c9ccc8392c3a89b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:50:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50e5f81d7ea47eb4cf85ebe4604f5e2657b2d52b4854963f4c9ccc8392c3a89b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:50:11 compute-0 podman[279159]: 2025-11-24 18:50:11.418438941 +0000 UTC m=+0.145701260 container init 817fd049a6ceecfe37cab921184d247277a5b9c1e86f941f3572bbcfb6bf4921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 18:50:11 compute-0 podman[279159]: 2025-11-24 18:50:11.430468156 +0000 UTC m=+0.157730375 container start 817fd049a6ceecfe37cab921184d247277a5b9c1e86f941f3572bbcfb6bf4921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mirzakhani, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 18:50:11 compute-0 podman[279159]: 2025-11-24 18:50:11.436410971 +0000 UTC m=+0.163673190 container attach 817fd049a6ceecfe37cab921184d247277a5b9c1e86f941f3572bbcfb6bf4921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mirzakhani, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:50:11 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:50:11.502 179763 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=302e9f34-0427-4ff9-a29b-2fc7b5250666, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 18:50:11 compute-0 nova_compute[270693]: 2025-11-24 18:50:11.530 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:50:11 compute-0 nova_compute[270693]: 2025-11-24 18:50:11.530 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 18:50:11 compute-0 nova_compute[270693]: 2025-11-24 18:50:11.530 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 18:50:11 compute-0 nova_compute[270693]: 2025-11-24 18:50:11.553 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 18:50:11 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1070: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.6 KiB/s wr, 48 op/s
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]: {
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:     "0": [
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:         {
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "devices": [
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "/dev/loop3"
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             ],
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "lv_name": "ceph_lv0",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "lv_size": "21470642176",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "name": "ceph_lv0",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "tags": {
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.cluster_name": "ceph",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.crush_device_class": "",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.encrypted": "0",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.osd_id": "0",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.type": "block",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.vdo": "0"
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             },
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "type": "block",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "vg_name": "ceph_vg0"
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:         }
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:     ],
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:     "1": [
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:         {
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "devices": [
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "/dev/loop4"
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             ],
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "lv_name": "ceph_lv1",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "lv_size": "21470642176",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "name": "ceph_lv1",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "tags": {
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.cluster_name": "ceph",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.crush_device_class": "",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.encrypted": "0",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.osd_id": "1",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.type": "block",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.vdo": "0"
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             },
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "type": "block",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "vg_name": "ceph_vg1"
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:         }
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:     ],
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:     "2": [
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:         {
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "devices": [
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "/dev/loop5"
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             ],
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "lv_name": "ceph_lv2",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "lv_size": "21470642176",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "name": "ceph_lv2",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "tags": {
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.cluster_name": "ceph",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.crush_device_class": "",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.encrypted": "0",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.osd_id": "2",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.type": "block",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:                 "ceph.vdo": "0"
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             },
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "type": "block",
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:             "vg_name": "ceph_vg2"
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:         }
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]:     ]
Nov 24 18:50:12 compute-0 reverent_mirzakhani[279175]: }
Nov 24 18:50:12 compute-0 systemd[1]: libpod-817fd049a6ceecfe37cab921184d247277a5b9c1e86f941f3572bbcfb6bf4921.scope: Deactivated successfully.
Nov 24 18:50:12 compute-0 podman[279159]: 2025-11-24 18:50:12.163416541 +0000 UTC m=+0.890678730 container died 817fd049a6ceecfe37cab921184d247277a5b9c1e86f941f3572bbcfb6bf4921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:50:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-50e5f81d7ea47eb4cf85ebe4604f5e2657b2d52b4854963f4c9ccc8392c3a89b-merged.mount: Deactivated successfully.
Nov 24 18:50:12 compute-0 podman[279159]: 2025-11-24 18:50:12.213951709 +0000 UTC m=+0.941213888 container remove 817fd049a6ceecfe37cab921184d247277a5b9c1e86f941f3572bbcfb6bf4921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mirzakhani, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 18:50:12 compute-0 systemd[1]: libpod-conmon-817fd049a6ceecfe37cab921184d247277a5b9c1e86f941f3572bbcfb6bf4921.scope: Deactivated successfully.
Nov 24 18:50:12 compute-0 sudo[279052]: pam_unix(sudo:session): session closed for user root
Nov 24 18:50:12 compute-0 sudo[279196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:50:12 compute-0 sudo[279196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:50:12 compute-0 sudo[279196]: pam_unix(sudo:session): session closed for user root
Nov 24 18:50:12 compute-0 sudo[279221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:50:12 compute-0 sudo[279221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:50:12 compute-0 sudo[279221]: pam_unix(sudo:session): session closed for user root
Nov 24 18:50:12 compute-0 sudo[279246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:50:12 compute-0 sudo[279246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:50:12 compute-0 sudo[279246]: pam_unix(sudo:session): session closed for user root
Nov 24 18:50:12 compute-0 sudo[279271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:50:12 compute-0 sudo[279271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:50:12 compute-0 nova_compute[270693]: 2025-11-24 18:50:12.570 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:50:12 compute-0 nova_compute[270693]: 2025-11-24 18:50:12.571 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:50:12 compute-0 nova_compute[270693]: 2025-11-24 18:50:12.571 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 18:50:12 compute-0 nova_compute[270693]: 2025-11-24 18:50:12.571 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:50:12 compute-0 nova_compute[270693]: 2025-11-24 18:50:12.599 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:50:12 compute-0 nova_compute[270693]: 2025-11-24 18:50:12.600 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:50:12 compute-0 nova_compute[270693]: 2025-11-24 18:50:12.601 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:50:12 compute-0 nova_compute[270693]: 2025-11-24 18:50:12.601 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 18:50:12 compute-0 nova_compute[270693]: 2025-11-24 18:50:12.601 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:50:12 compute-0 podman[279356]: 2025-11-24 18:50:12.880096438 +0000 UTC m=+0.058314410 container create 0b324e4c86f1dc41b1a626d0a2af9c859b41a4b395377a04c250a236e6a6381a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:50:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:50:12 compute-0 ceph-mon[74927]: pgmap v1070: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.6 KiB/s wr, 48 op/s
Nov 24 18:50:12 compute-0 systemd[1]: Started libpod-conmon-0b324e4c86f1dc41b1a626d0a2af9c859b41a4b395377a04c250a236e6a6381a.scope.
Nov 24 18:50:12 compute-0 podman[279356]: 2025-11-24 18:50:12.844519086 +0000 UTC m=+0.022737088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:50:12 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:50:12 compute-0 podman[279356]: 2025-11-24 18:50:12.977556475 +0000 UTC m=+0.155774517 container init 0b324e4c86f1dc41b1a626d0a2af9c859b41a4b395377a04c250a236e6a6381a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 18:50:12 compute-0 podman[279356]: 2025-11-24 18:50:12.985836118 +0000 UTC m=+0.164054050 container start 0b324e4c86f1dc41b1a626d0a2af9c859b41a4b395377a04c250a236e6a6381a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_chaplygin, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:50:12 compute-0 serene_chaplygin[279372]: 167 167
Nov 24 18:50:12 compute-0 systemd[1]: libpod-0b324e4c86f1dc41b1a626d0a2af9c859b41a4b395377a04c250a236e6a6381a.scope: Deactivated successfully.
Nov 24 18:50:12 compute-0 podman[279356]: 2025-11-24 18:50:12.990373769 +0000 UTC m=+0.168591811 container attach 0b324e4c86f1dc41b1a626d0a2af9c859b41a4b395377a04c250a236e6a6381a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_chaplygin, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:50:12 compute-0 podman[279356]: 2025-11-24 18:50:12.994639464 +0000 UTC m=+0.172857386 container died 0b324e4c86f1dc41b1a626d0a2af9c859b41a4b395377a04c250a236e6a6381a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:50:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:50:13 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2772554347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:50:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-262e5034cf47a1cfbdf26c13650d6788696e4d78d5d190ab8659f58394862cc2-merged.mount: Deactivated successfully.
Nov 24 18:50:13 compute-0 nova_compute[270693]: 2025-11-24 18:50:13.024 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:50:13 compute-0 podman[279356]: 2025-11-24 18:50:13.033290381 +0000 UTC m=+0.211508343 container remove 0b324e4c86f1dc41b1a626d0a2af9c859b41a4b395377a04c250a236e6a6381a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:50:13 compute-0 systemd[1]: libpod-conmon-0b324e4c86f1dc41b1a626d0a2af9c859b41a4b395377a04c250a236e6a6381a.scope: Deactivated successfully.
Nov 24 18:50:13 compute-0 nova_compute[270693]: 2025-11-24 18:50:13.188 270697 WARNING nova.virt.libvirt.driver [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 18:50:13 compute-0 nova_compute[270693]: 2025-11-24 18:50:13.190 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5056MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 18:50:13 compute-0 nova_compute[270693]: 2025-11-24 18:50:13.190 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:50:13 compute-0 nova_compute[270693]: 2025-11-24 18:50:13.191 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:50:13 compute-0 podman[279399]: 2025-11-24 18:50:13.212587263 +0000 UTC m=+0.043765103 container create d141513bee128be76f8cfd47b9c7a5e812fd52f1cc26d9c2c5b815ef973431bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_germain, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:50:13 compute-0 systemd[1]: Started libpod-conmon-d141513bee128be76f8cfd47b9c7a5e812fd52f1cc26d9c2c5b815ef973431bb.scope.
Nov 24 18:50:13 compute-0 nova_compute[270693]: 2025-11-24 18:50:13.254 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 18:50:13 compute-0 nova_compute[270693]: 2025-11-24 18:50:13.254 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 18:50:13 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:50:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13ed4b036e63f4104d34d27ab7867a9216708831b570220b0c98c49ea749d57c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:50:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13ed4b036e63f4104d34d27ab7867a9216708831b570220b0c98c49ea749d57c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:50:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13ed4b036e63f4104d34d27ab7867a9216708831b570220b0c98c49ea749d57c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:50:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13ed4b036e63f4104d34d27ab7867a9216708831b570220b0c98c49ea749d57c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:50:13 compute-0 podman[279399]: 2025-11-24 18:50:13.274602212 +0000 UTC m=+0.105780062 container init d141513bee128be76f8cfd47b9c7a5e812fd52f1cc26d9c2c5b815ef973431bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_germain, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 18:50:13 compute-0 nova_compute[270693]: 2025-11-24 18:50:13.274 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:50:13 compute-0 podman[279399]: 2025-11-24 18:50:13.286813771 +0000 UTC m=+0.117991621 container start d141513bee128be76f8cfd47b9c7a5e812fd52f1cc26d9c2c5b815ef973431bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 18:50:13 compute-0 podman[279399]: 2025-11-24 18:50:13.195697409 +0000 UTC m=+0.026875279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:50:13 compute-0 podman[279399]: 2025-11-24 18:50:13.290042581 +0000 UTC m=+0.121220431 container attach d141513bee128be76f8cfd47b9c7a5e812fd52f1cc26d9c2c5b815ef973431bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_germain, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 18:50:13 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1071: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1023 B/s wr, 20 op/s
Nov 24 18:50:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:50:13 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1820872512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:50:13 compute-0 nova_compute[270693]: 2025-11-24 18:50:13.661 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.386s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:50:13 compute-0 nova_compute[270693]: 2025-11-24 18:50:13.669 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 18:50:13 compute-0 nova_compute[270693]: 2025-11-24 18:50:13.694 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 18:50:13 compute-0 nova_compute[270693]: 2025-11-24 18:50:13.723 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 18:50:13 compute-0 nova_compute[270693]: 2025-11-24 18:50:13.723 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:50:13 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2772554347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:50:13 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1820872512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:50:14 compute-0 interesting_germain[279415]: {
Nov 24 18:50:14 compute-0 interesting_germain[279415]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:50:14 compute-0 interesting_germain[279415]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:50:14 compute-0 interesting_germain[279415]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:50:14 compute-0 interesting_germain[279415]:         "osd_id": 0,
Nov 24 18:50:14 compute-0 interesting_germain[279415]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:50:14 compute-0 interesting_germain[279415]:         "type": "bluestore"
Nov 24 18:50:14 compute-0 interesting_germain[279415]:     },
Nov 24 18:50:14 compute-0 interesting_germain[279415]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:50:14 compute-0 interesting_germain[279415]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:50:14 compute-0 interesting_germain[279415]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:50:14 compute-0 interesting_germain[279415]:         "osd_id": 1,
Nov 24 18:50:14 compute-0 interesting_germain[279415]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:50:14 compute-0 interesting_germain[279415]:         "type": "bluestore"
Nov 24 18:50:14 compute-0 interesting_germain[279415]:     },
Nov 24 18:50:14 compute-0 interesting_germain[279415]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:50:14 compute-0 interesting_germain[279415]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:50:14 compute-0 interesting_germain[279415]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:50:14 compute-0 interesting_germain[279415]:         "osd_id": 2,
Nov 24 18:50:14 compute-0 interesting_germain[279415]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:50:14 compute-0 interesting_germain[279415]:         "type": "bluestore"
Nov 24 18:50:14 compute-0 interesting_germain[279415]:     }
Nov 24 18:50:14 compute-0 interesting_germain[279415]: }
Nov 24 18:50:14 compute-0 systemd[1]: libpod-d141513bee128be76f8cfd47b9c7a5e812fd52f1cc26d9c2c5b815ef973431bb.scope: Deactivated successfully.
Nov 24 18:50:14 compute-0 podman[279399]: 2025-11-24 18:50:14.269225067 +0000 UTC m=+1.100402907 container died d141513bee128be76f8cfd47b9c7a5e812fd52f1cc26d9c2c5b815ef973431bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_germain, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:50:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-13ed4b036e63f4104d34d27ab7867a9216708831b570220b0c98c49ea749d57c-merged.mount: Deactivated successfully.
Nov 24 18:50:14 compute-0 podman[279399]: 2025-11-24 18:50:14.31628255 +0000 UTC m=+1.147460390 container remove d141513bee128be76f8cfd47b9c7a5e812fd52f1cc26d9c2c5b815ef973431bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_germain, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 18:50:14 compute-0 systemd[1]: libpod-conmon-d141513bee128be76f8cfd47b9c7a5e812fd52f1cc26d9c2c5b815ef973431bb.scope: Deactivated successfully.
Nov 24 18:50:14 compute-0 sudo[279271]: pam_unix(sudo:session): session closed for user root
Nov 24 18:50:14 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:50:14 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:50:14 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:50:14 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:50:14 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev c9ba5153-d2c9-47f9-aa83-cce357d7f708 does not exist
Nov 24 18:50:14 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 526395e9-cc88-48bd-ac3d-c2ca670201e0 does not exist
Nov 24 18:50:14 compute-0 sudo[279480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:50:14 compute-0 sudo[279480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:50:14 compute-0 sudo[279480]: pam_unix(sudo:session): session closed for user root
Nov 24 18:50:14 compute-0 sudo[279505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:50:14 compute-0 sudo[279505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:50:14 compute-0 sudo[279505]: pam_unix(sudo:session): session closed for user root
Nov 24 18:50:14 compute-0 nova_compute[270693]: 2025-11-24 18:50:14.681 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:50:14 compute-0 nova_compute[270693]: 2025-11-24 18:50:14.682 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:50:15 compute-0 ceph-mon[74927]: pgmap v1071: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1023 B/s wr, 20 op/s
Nov 24 18:50:15 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:50:15 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:50:15 compute-0 nova_compute[270693]: 2025-11-24 18:50:15.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:50:15 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1072: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:16 compute-0 nova_compute[270693]: 2025-11-24 18:50:16.144 270697 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764010201.142717, 81f5edb9-2756-4a6e-bc3a-fa770161d562 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 18:50:16 compute-0 nova_compute[270693]: 2025-11-24 18:50:16.145 270697 INFO nova.compute.manager [-] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] VM Stopped (Lifecycle Event)
Nov 24 18:50:16 compute-0 nova_compute[270693]: 2025-11-24 18:50:16.170 270697 DEBUG nova.compute.manager [None req-032a844a-59f2-47a3-a8ff-742d66f3edfc - - - - - -] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 18:50:16 compute-0 nova_compute[270693]: 2025-11-24 18:50:16.530 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:50:16 compute-0 nova_compute[270693]: 2025-11-24 18:50:16.531 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:50:17 compute-0 ceph-mon[74927]: pgmap v1072: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:17 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1073: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:50:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 18:50:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3451871078' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:50:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 18:50:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3451871078' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:50:19 compute-0 ceph-mon[74927]: pgmap v1073: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/3451871078' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:50:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/3451871078' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:50:19 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1074: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:21 compute-0 ceph-mon[74927]: pgmap v1074: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:21 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1075: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:50:22.744 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:50:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:50:22.745 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:50:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:50:22.745 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:50:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:50:23 compute-0 ceph-mon[74927]: pgmap v1075: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:23 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1076: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:25 compute-0 ceph-mon[74927]: pgmap v1076: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:25 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1077: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:27 compute-0 ceph-mon[74927]: pgmap v1077: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:27 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1078: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:50:29 compute-0 ceph-mon[74927]: pgmap v1078: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:29 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1079: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:31 compute-0 ceph-mon[74927]: pgmap v1079: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:31 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1080: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:50:33 compute-0 ceph-mon[74927]: pgmap v1080: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:33 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1081: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:33 compute-0 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:50:33 compute-0 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 6867 writes, 27K keys, 6867 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6867 writes, 1384 syncs, 4.96 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1285 writes, 3543 keys, 1285 commit groups, 1.0 writes per commit group, ingest: 1.95 MB, 0.00 MB/s
                                           Interval WAL: 1285 writes, 527 syncs, 2.44 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 18:50:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:50:34
Nov 24 18:50:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:50:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:50:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['vms', 'images', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', '.mgr', 'default.rgw.log']
Nov 24 18:50:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:50:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:50:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:50:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:50:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:50:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:50:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:50:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:50:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:50:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:50:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:50:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:50:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:50:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:50:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:50:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:50:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:50:35 compute-0 ceph-mon[74927]: pgmap v1081: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:35 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1082: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:37 compute-0 podman[279530]: 2025-11-24 18:50:37.007618867 +0000 UTC m=+0.080083153 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 18:50:37 compute-0 ceph-mon[74927]: pgmap v1082: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:37 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1083: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:50:37 compute-0 podman[279555]: 2025-11-24 18:50:37.959624169 +0000 UTC m=+0.051528694 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, 
org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 24 18:50:37 compute-0 podman[279556]: 2025-11-24 18:50:37.972656918 +0000 UTC m=+0.062772849 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 18:50:39 compute-0 ceph-mon[74927]: pgmap v1083: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:39 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1084: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:40 compute-0 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:50:40 compute-0 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1801.0 total, 600.0 interval
                                           Cumulative writes: 8591 writes, 32K keys, 8591 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8591 writes, 2012 syncs, 4.27 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1906 writes, 4972 keys, 1906 commit groups, 1.0 writes per commit group, ingest: 2.41 MB, 0.00 MB/s
                                           Interval WAL: 1906 writes, 803 syncs, 2.37 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 18:50:41 compute-0 ceph-mon[74927]: pgmap v1084: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:41 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1085: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:50:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:50:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:50:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:50:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:50:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 18:50:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:50:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 18:50:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:50:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:50:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:50:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 18:50:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:50:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:50:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:50:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:50:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:50:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:50:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:50:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:50:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:50:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:50:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:50:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:50:43 compute-0 ceph-mon[74927]: pgmap v1085: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:43 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1086: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:44 compute-0 ceph-mon[74927]: pgmap v1086: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:45 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1087: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:46 compute-0 ceph-mon[74927]: pgmap v1087: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:47 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1088: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:50:47 compute-0 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:50:47 compute-0 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 7658 writes, 29K keys, 7658 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7658 writes, 1723 syncs, 4.44 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1996 writes, 5287 keys, 1996 commit groups, 1.0 writes per commit group, ingest: 2.75 MB, 0.00 MB/s
                                           Interval WAL: 1996 writes, 864 syncs, 2.31 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 18:50:48 compute-0 ceph-mon[74927]: pgmap v1088: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:49 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1089: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:50 compute-0 ceph-mgr[75218]: [devicehealth INFO root] Check health
Nov 24 18:50:50 compute-0 ceph-mon[74927]: pgmap v1089: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:51 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1090: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:52 compute-0 ceph-mon[74927]: pgmap v1090: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:50:53 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1091: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:54 compute-0 ceph-mon[74927]: pgmap v1091: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:55 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1092: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:56 compute-0 ceph-mon[74927]: pgmap v1092: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:57 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1093: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:50:58 compute-0 ceph-mon[74927]: pgmap v1093: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:50:59 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1094: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:00 compute-0 ceph-mon[74927]: pgmap v1094: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:01 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1095: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:02 compute-0 ceph-mon[74927]: pgmap v1095: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:51:03 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1096: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:51:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:51:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:51:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:51:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:51:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:51:04 compute-0 ceph-mon[74927]: pgmap v1096: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:05 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1097: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:06 compute-0 ceph-mon[74927]: pgmap v1097: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:07 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1098: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:51:07 compute-0 podman[279596]: 2025-11-24 18:51:07.976278292 +0000 UTC m=+0.076632844 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 24 18:51:08 compute-0 podman[279622]: 2025-11-24 18:51:08.049883411 +0000 UTC m=+0.047199191 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 24 18:51:08 compute-0 podman[279623]: 2025-11-24 18:51:08.121379328 +0000 UTC m=+0.101491185 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 24 18:51:08 compute-0 ceph-mon[74927]: pgmap v1098: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:09 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1099: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:10 compute-0 ceph-mon[74927]: pgmap v1099: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:11 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1100: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:12 compute-0 nova_compute[270693]: 2025-11-24 18:51:12.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:51:12 compute-0 nova_compute[270693]: 2025-11-24 18:51:12.529 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 18:51:12 compute-0 nova_compute[270693]: 2025-11-24 18:51:12.529 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 18:51:12 compute-0 nova_compute[270693]: 2025-11-24 18:51:12.542 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 18:51:12 compute-0 nova_compute[270693]: 2025-11-24 18:51:12.543 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:51:12 compute-0 nova_compute[270693]: 2025-11-24 18:51:12.565 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:51:12 compute-0 nova_compute[270693]: 2025-11-24 18:51:12.565 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:51:12 compute-0 nova_compute[270693]: 2025-11-24 18:51:12.565 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:51:12 compute-0 nova_compute[270693]: 2025-11-24 18:51:12.566 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 18:51:12 compute-0 nova_compute[270693]: 2025-11-24 18:51:12.566 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:51:12 compute-0 ceph-mon[74927]: pgmap v1100: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:51:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:51:12 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1704332064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:51:12 compute-0 nova_compute[270693]: 2025-11-24 18:51:12.975 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:51:13 compute-0 nova_compute[270693]: 2025-11-24 18:51:13.129 270697 WARNING nova.virt.libvirt.driver [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 18:51:13 compute-0 nova_compute[270693]: 2025-11-24 18:51:13.130 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5078MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 18:51:13 compute-0 nova_compute[270693]: 2025-11-24 18:51:13.130 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:51:13 compute-0 nova_compute[270693]: 2025-11-24 18:51:13.131 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:51:13 compute-0 nova_compute[270693]: 2025-11-24 18:51:13.209 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 18:51:13 compute-0 nova_compute[270693]: 2025-11-24 18:51:13.209 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 18:51:13 compute-0 nova_compute[270693]: 2025-11-24 18:51:13.232 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:51:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:51:13 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2446840728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:51:13 compute-0 nova_compute[270693]: 2025-11-24 18:51:13.663 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:51:13 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1101: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:13 compute-0 nova_compute[270693]: 2025-11-24 18:51:13.669 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 18:51:13 compute-0 nova_compute[270693]: 2025-11-24 18:51:13.687 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 18:51:13 compute-0 nova_compute[270693]: 2025-11-24 18:51:13.688 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 18:51:13 compute-0 nova_compute[270693]: 2025-11-24 18:51:13.688 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:51:13 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1704332064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:51:13 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2446840728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:51:14 compute-0 sudo[279706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:51:14 compute-0 sudo[279706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:51:14 compute-0 sudo[279706]: pam_unix(sudo:session): session closed for user root
Nov 24 18:51:14 compute-0 sudo[279731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:51:14 compute-0 sudo[279731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:51:14 compute-0 sudo[279731]: pam_unix(sudo:session): session closed for user root
Nov 24 18:51:14 compute-0 nova_compute[270693]: 2025-11-24 18:51:14.675 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:51:14 compute-0 nova_compute[270693]: 2025-11-24 18:51:14.675 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:51:14 compute-0 sudo[279756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:51:14 compute-0 sudo[279756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:51:14 compute-0 sudo[279756]: pam_unix(sudo:session): session closed for user root
Nov 24 18:51:14 compute-0 nova_compute[270693]: 2025-11-24 18:51:14.696 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:51:14 compute-0 nova_compute[270693]: 2025-11-24 18:51:14.697 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:51:14 compute-0 nova_compute[270693]: 2025-11-24 18:51:14.697 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 18:51:14 compute-0 sudo[279781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:51:14 compute-0 sudo[279781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:51:14 compute-0 ceph-mon[74927]: pgmap v1101: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:15 compute-0 sudo[279781]: pam_unix(sudo:session): session closed for user root
Nov 24 18:51:15 compute-0 sudo[279838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:51:15 compute-0 sudo[279838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:51:15 compute-0 sudo[279838]: pam_unix(sudo:session): session closed for user root
Nov 24 18:51:15 compute-0 sudo[279863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:51:15 compute-0 sudo[279863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:51:15 compute-0 sudo[279863]: pam_unix(sudo:session): session closed for user root
Nov 24 18:51:15 compute-0 sudo[279888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:51:15 compute-0 sudo[279888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:51:15 compute-0 sudo[279888]: pam_unix(sudo:session): session closed for user root
Nov 24 18:51:15 compute-0 nova_compute[270693]: 2025-11-24 18:51:15.530 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:51:15 compute-0 sudo[279913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- inventory --format=json-pretty --filter-for-batch
Nov 24 18:51:15 compute-0 sudo[279913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:51:15 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1102: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:16 compute-0 podman[279979]: 2025-11-24 18:51:16.005973855 +0000 UTC m=+0.071860357 container create 93d7c9a76e7a5c1e0406da46098d4fd2612476dd0332a1654cdc93981b2c85b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_buck, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 24 18:51:16 compute-0 systemd[1]: Started libpod-conmon-93d7c9a76e7a5c1e0406da46098d4fd2612476dd0332a1654cdc93981b2c85b8.scope.
Nov 24 18:51:16 compute-0 podman[279979]: 2025-11-24 18:51:15.975534547 +0000 UTC m=+0.041421109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:51:16 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:51:16 compute-0 podman[279979]: 2025-11-24 18:51:16.099987735 +0000 UTC m=+0.165874237 container init 93d7c9a76e7a5c1e0406da46098d4fd2612476dd0332a1654cdc93981b2c85b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:51:16 compute-0 podman[279979]: 2025-11-24 18:51:16.107719765 +0000 UTC m=+0.173606237 container start 93d7c9a76e7a5c1e0406da46098d4fd2612476dd0332a1654cdc93981b2c85b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_buck, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:51:16 compute-0 podman[279979]: 2025-11-24 18:51:16.111231951 +0000 UTC m=+0.177118423 container attach 93d7c9a76e7a5c1e0406da46098d4fd2612476dd0332a1654cdc93981b2c85b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:51:16 compute-0 awesome_buck[279995]: 167 167
Nov 24 18:51:16 compute-0 systemd[1]: libpod-93d7c9a76e7a5c1e0406da46098d4fd2612476dd0332a1654cdc93981b2c85b8.scope: Deactivated successfully.
Nov 24 18:51:16 compute-0 podman[279979]: 2025-11-24 18:51:16.117596837 +0000 UTC m=+0.183483339 container died 93d7c9a76e7a5c1e0406da46098d4fd2612476dd0332a1654cdc93981b2c85b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_buck, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 18:51:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef0c84802b238213d6085ab9eb80ec4376933e23cadd268316e5212a66b66fd0-merged.mount: Deactivated successfully.
Nov 24 18:51:16 compute-0 podman[279979]: 2025-11-24 18:51:16.174229899 +0000 UTC m=+0.240116401 container remove 93d7c9a76e7a5c1e0406da46098d4fd2612476dd0332a1654cdc93981b2c85b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_buck, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:51:16 compute-0 systemd[1]: libpod-conmon-93d7c9a76e7a5c1e0406da46098d4fd2612476dd0332a1654cdc93981b2c85b8.scope: Deactivated successfully.
Nov 24 18:51:16 compute-0 podman[280018]: 2025-11-24 18:51:16.413188211 +0000 UTC m=+0.072758419 container create 17e266acf06dfd8914a2e8bd6098dfb5bccda79b26246815a3ac1f51fd73dbb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:51:16 compute-0 systemd[1]: Started libpod-conmon-17e266acf06dfd8914a2e8bd6098dfb5bccda79b26246815a3ac1f51fd73dbb0.scope.
Nov 24 18:51:16 compute-0 podman[280018]: 2025-11-24 18:51:16.384879625 +0000 UTC m=+0.044449913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:51:16 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:51:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60e756399f52dc5572c3c388b63749fc1133525b7297ec993a57d035de05f4c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:51:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60e756399f52dc5572c3c388b63749fc1133525b7297ec993a57d035de05f4c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:51:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60e756399f52dc5572c3c388b63749fc1133525b7297ec993a57d035de05f4c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:51:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60e756399f52dc5572c3c388b63749fc1133525b7297ec993a57d035de05f4c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:51:16 compute-0 nova_compute[270693]: 2025-11-24 18:51:16.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:51:16 compute-0 nova_compute[270693]: 2025-11-24 18:51:16.530 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:51:16 compute-0 podman[280018]: 2025-11-24 18:51:16.5356577 +0000 UTC m=+0.195227888 container init 17e266acf06dfd8914a2e8bd6098dfb5bccda79b26246815a3ac1f51fd73dbb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 18:51:16 compute-0 podman[280018]: 2025-11-24 18:51:16.548469395 +0000 UTC m=+0.208039613 container start 17e266acf06dfd8914a2e8bd6098dfb5bccda79b26246815a3ac1f51fd73dbb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_roentgen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:51:16 compute-0 podman[280018]: 2025-11-24 18:51:16.553996221 +0000 UTC m=+0.213566429 container attach 17e266acf06dfd8914a2e8bd6098dfb5bccda79b26246815a3ac1f51fd73dbb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_roentgen, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 18:51:16 compute-0 ceph-mon[74927]: pgmap v1102: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:17 compute-0 nova_compute[270693]: 2025-11-24 18:51:17.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:51:17 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1103: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]: [
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:     {
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:         "available": false,
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:         "ceph_device": false,
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:         "lsm_data": {},
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:         "lvs": [],
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:         "path": "/dev/sr0",
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:         "rejected_reasons": [
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:             "Has a FileSystem",
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:             "Insufficient space (<5GB)"
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:         ],
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:         "sys_api": {
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:             "actuators": null,
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:             "device_nodes": "sr0",
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:             "devname": "sr0",
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:             "human_readable_size": "482.00 KB",
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:             "id_bus": "ata",
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:             "model": "QEMU DVD-ROM",
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:             "nr_requests": "2",
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:             "parent": "/dev/sr0",
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:             "partitions": {},
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:             "path": "/dev/sr0",
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:             "removable": "1",
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:             "rev": "2.5+",
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:             "ro": "0",
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:             "rotational": "1",
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:             "sas_address": "",
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:             "sas_device_handle": "",
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:             "scheduler_mode": "mq-deadline",
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:             "sectors": 0,
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:             "sectorsize": "2048",
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:             "size": 493568.0,
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:             "support_discard": "2048",
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:             "type": "disk",
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:             "vendor": "QEMU"
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:         }
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]:     }
Nov 24 18:51:18 compute-0 pedantic_roentgen[280034]: ]
Nov 24 18:51:18 compute-0 systemd[1]: libpod-17e266acf06dfd8914a2e8bd6098dfb5bccda79b26246815a3ac1f51fd73dbb0.scope: Deactivated successfully.
Nov 24 18:51:18 compute-0 podman[280018]: 2025-11-24 18:51:18.130812455 +0000 UTC m=+1.790382623 container died 17e266acf06dfd8914a2e8bd6098dfb5bccda79b26246815a3ac1f51fd73dbb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:51:18 compute-0 systemd[1]: libpod-17e266acf06dfd8914a2e8bd6098dfb5bccda79b26246815a3ac1f51fd73dbb0.scope: Consumed 1.635s CPU time.
Nov 24 18:51:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-60e756399f52dc5572c3c388b63749fc1133525b7297ec993a57d035de05f4c4-merged.mount: Deactivated successfully.
Nov 24 18:51:18 compute-0 podman[280018]: 2025-11-24 18:51:18.180880856 +0000 UTC m=+1.840451024 container remove 17e266acf06dfd8914a2e8bd6098dfb5bccda79b26246815a3ac1f51fd73dbb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:51:18 compute-0 systemd[1]: libpod-conmon-17e266acf06dfd8914a2e8bd6098dfb5bccda79b26246815a3ac1f51fd73dbb0.scope: Deactivated successfully.
Nov 24 18:51:18 compute-0 sudo[279913]: pam_unix(sudo:session): session closed for user root
Nov 24 18:51:18 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:51:18 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:51:18 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:51:18 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:51:18 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 24 18:51:18 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 18:51:18 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:51:18 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:51:18 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:51:18 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:51:18 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:51:18 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:51:18 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 31c098d9-8c9d-4d64-a5d1-e75c3c6d16f0 does not exist
Nov 24 18:51:18 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 1058b7ed-47b5-45c2-a1c3-845433273a2d does not exist
Nov 24 18:51:18 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev fa19832c-bf97-4640-8cd2-c3e53d849d34 does not exist
Nov 24 18:51:18 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:51:18 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:51:18 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:51:18 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:51:18 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:51:18 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:51:18 compute-0 sudo[282074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:51:18 compute-0 sudo[282074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:51:18 compute-0 sudo[282074]: pam_unix(sudo:session): session closed for user root
Nov 24 18:51:18 compute-0 sudo[282099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:51:18 compute-0 sudo[282099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:51:18 compute-0 sudo[282099]: pam_unix(sudo:session): session closed for user root
Nov 24 18:51:18 compute-0 sudo[282124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:51:18 compute-0 sudo[282124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:51:18 compute-0 sudo[282124]: pam_unix(sudo:session): session closed for user root
Nov 24 18:51:18 compute-0 sudo[282149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:51:18 compute-0 sudo[282149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:51:18 compute-0 podman[282214]: 2025-11-24 18:51:18.797643151 +0000 UTC m=+0.033189647 container create 0431124be64285e982ea8595a79e6a473ec61dbaede7bc28925adbd3fe8a4853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 18:51:18 compute-0 systemd[1]: Started libpod-conmon-0431124be64285e982ea8595a79e6a473ec61dbaede7bc28925adbd3fe8a4853.scope.
Nov 24 18:51:18 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:51:18 compute-0 podman[282214]: 2025-11-24 18:51:18.783683128 +0000 UTC m=+0.019229644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:51:18 compute-0 podman[282214]: 2025-11-24 18:51:18.880705312 +0000 UTC m=+0.116251818 container init 0431124be64285e982ea8595a79e6a473ec61dbaede7bc28925adbd3fe8a4853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:51:18 compute-0 podman[282214]: 2025-11-24 18:51:18.890482472 +0000 UTC m=+0.126028958 container start 0431124be64285e982ea8595a79e6a473ec61dbaede7bc28925adbd3fe8a4853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 18:51:18 compute-0 podman[282214]: 2025-11-24 18:51:18.893512526 +0000 UTC m=+0.129059032 container attach 0431124be64285e982ea8595a79e6a473ec61dbaede7bc28925adbd3fe8a4853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 18:51:18 compute-0 goofy_lederberg[282230]: 167 167
Nov 24 18:51:18 compute-0 systemd[1]: libpod-0431124be64285e982ea8595a79e6a473ec61dbaede7bc28925adbd3fe8a4853.scope: Deactivated successfully.
Nov 24 18:51:18 compute-0 podman[282214]: 2025-11-24 18:51:18.897986346 +0000 UTC m=+0.133532832 container died 0431124be64285e982ea8595a79e6a473ec61dbaede7bc28925adbd3fe8a4853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 18:51:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-a204410acf5033ac86def89c76b65a619a384ad96b735e8710cef799b54f6693-merged.mount: Deactivated successfully.
Nov 24 18:51:18 compute-0 podman[282214]: 2025-11-24 18:51:18.935203401 +0000 UTC m=+0.170749897 container remove 0431124be64285e982ea8595a79e6a473ec61dbaede7bc28925adbd3fe8a4853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lederberg, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 24 18:51:18 compute-0 systemd[1]: libpod-conmon-0431124be64285e982ea8595a79e6a473ec61dbaede7bc28925adbd3fe8a4853.scope: Deactivated successfully.
Nov 24 18:51:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 18:51:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/55251396' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:51:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 18:51:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/55251396' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:51:19 compute-0 podman[282253]: 2025-11-24 18:51:19.081856244 +0000 UTC m=+0.036430586 container create 58d69940405dfc52e65d32f3b2555e03f80b22b54e3977b5f027994061de1cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:51:19 compute-0 systemd[1]: Started libpod-conmon-58d69940405dfc52e65d32f3b2555e03f80b22b54e3977b5f027994061de1cab.scope.
Nov 24 18:51:19 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:51:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee045df71c7cf1bae164319a3f5fc18bed9c36eb2af2d03be2a140509af04bcc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:51:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee045df71c7cf1bae164319a3f5fc18bed9c36eb2af2d03be2a140509af04bcc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:51:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee045df71c7cf1bae164319a3f5fc18bed9c36eb2af2d03be2a140509af04bcc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:51:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee045df71c7cf1bae164319a3f5fc18bed9c36eb2af2d03be2a140509af04bcc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:51:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee045df71c7cf1bae164319a3f5fc18bed9c36eb2af2d03be2a140509af04bcc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:51:19 compute-0 podman[282253]: 2025-11-24 18:51:19.066407425 +0000 UTC m=+0.020981777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:51:19 compute-0 podman[282253]: 2025-11-24 18:51:19.169824156 +0000 UTC m=+0.124398548 container init 58d69940405dfc52e65d32f3b2555e03f80b22b54e3977b5f027994061de1cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dewdney, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:51:19 compute-0 podman[282253]: 2025-11-24 18:51:19.175868505 +0000 UTC m=+0.130442877 container start 58d69940405dfc52e65d32f3b2555e03f80b22b54e3977b5f027994061de1cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dewdney, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:51:19 compute-0 podman[282253]: 2025-11-24 18:51:19.180542749 +0000 UTC m=+0.135117121 container attach 58d69940405dfc52e65d32f3b2555e03f80b22b54e3977b5f027994061de1cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Nov 24 18:51:19 compute-0 ceph-mon[74927]: pgmap v1103: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:19 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:51:19 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:51:19 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 18:51:19 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:51:19 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:51:19 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:51:19 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:51:19 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:51:19 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:51:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/55251396' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:51:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/55251396' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:51:19 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1104: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:20 compute-0 adoring_dewdney[282270]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:51:20 compute-0 adoring_dewdney[282270]: --> relative data size: 1.0
Nov 24 18:51:20 compute-0 adoring_dewdney[282270]: --> All data devices are unavailable
Nov 24 18:51:20 compute-0 systemd[1]: libpod-58d69940405dfc52e65d32f3b2555e03f80b22b54e3977b5f027994061de1cab.scope: Deactivated successfully.
Nov 24 18:51:20 compute-0 systemd[1]: libpod-58d69940405dfc52e65d32f3b2555e03f80b22b54e3977b5f027994061de1cab.scope: Consumed 1.122s CPU time.
Nov 24 18:51:20 compute-0 podman[282253]: 2025-11-24 18:51:20.357490919 +0000 UTC m=+1.312065271 container died 58d69940405dfc52e65d32f3b2555e03f80b22b54e3977b5f027994061de1cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:51:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee045df71c7cf1bae164319a3f5fc18bed9c36eb2af2d03be2a140509af04bcc-merged.mount: Deactivated successfully.
Nov 24 18:51:20 compute-0 podman[282253]: 2025-11-24 18:51:20.403039209 +0000 UTC m=+1.357613541 container remove 58d69940405dfc52e65d32f3b2555e03f80b22b54e3977b5f027994061de1cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 18:51:20 compute-0 systemd[1]: libpod-conmon-58d69940405dfc52e65d32f3b2555e03f80b22b54e3977b5f027994061de1cab.scope: Deactivated successfully.
Nov 24 18:51:20 compute-0 sudo[282149]: pam_unix(sudo:session): session closed for user root
Nov 24 18:51:20 compute-0 sudo[282311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:51:20 compute-0 sudo[282311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:51:20 compute-0 sudo[282311]: pam_unix(sudo:session): session closed for user root
Nov 24 18:51:20 compute-0 sudo[282336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:51:20 compute-0 sudo[282336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:51:20 compute-0 sudo[282336]: pam_unix(sudo:session): session closed for user root
Nov 24 18:51:20 compute-0 sudo[282361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:51:20 compute-0 sudo[282361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:51:20 compute-0 sudo[282361]: pam_unix(sudo:session): session closed for user root
Nov 24 18:51:20 compute-0 sudo[282386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:51:20 compute-0 sudo[282386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:51:21 compute-0 podman[282451]: 2025-11-24 18:51:21.171569743 +0000 UTC m=+0.079819712 container create 5dfaa8ed454a6a33d1d38bdd8858db4285d2085b6e7dd25e3764a36473d70373 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 18:51:21 compute-0 systemd[1]: Started libpod-conmon-5dfaa8ed454a6a33d1d38bdd8858db4285d2085b6e7dd25e3764a36473d70373.scope.
Nov 24 18:51:21 compute-0 podman[282451]: 2025-11-24 18:51:21.142797336 +0000 UTC m=+0.051047365 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:51:21 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:51:21 compute-0 ceph-mon[74927]: pgmap v1104: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:21 compute-0 podman[282451]: 2025-11-24 18:51:21.258413957 +0000 UTC m=+0.166663956 container init 5dfaa8ed454a6a33d1d38bdd8858db4285d2085b6e7dd25e3764a36473d70373 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_elbakyan, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:51:21 compute-0 podman[282451]: 2025-11-24 18:51:21.265739857 +0000 UTC m=+0.173989806 container start 5dfaa8ed454a6a33d1d38bdd8858db4285d2085b6e7dd25e3764a36473d70373 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_elbakyan, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 18:51:21 compute-0 podman[282451]: 2025-11-24 18:51:21.26951941 +0000 UTC m=+0.177769379 container attach 5dfaa8ed454a6a33d1d38bdd8858db4285d2085b6e7dd25e3764a36473d70373 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 18:51:21 compute-0 exciting_elbakyan[282467]: 167 167
Nov 24 18:51:21 compute-0 systemd[1]: libpod-5dfaa8ed454a6a33d1d38bdd8858db4285d2085b6e7dd25e3764a36473d70373.scope: Deactivated successfully.
Nov 24 18:51:21 compute-0 podman[282451]: 2025-11-24 18:51:21.271866587 +0000 UTC m=+0.180116526 container died 5dfaa8ed454a6a33d1d38bdd8858db4285d2085b6e7dd25e3764a36473d70373 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:51:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-8316b3db69730bc998ad46820752cbe0c93a6164474503742ef1d673a5fbfede-merged.mount: Deactivated successfully.
Nov 24 18:51:21 compute-0 podman[282451]: 2025-11-24 18:51:21.311408019 +0000 UTC m=+0.219657988 container remove 5dfaa8ed454a6a33d1d38bdd8858db4285d2085b6e7dd25e3764a36473d70373 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_elbakyan, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 18:51:21 compute-0 systemd[1]: libpod-conmon-5dfaa8ed454a6a33d1d38bdd8858db4285d2085b6e7dd25e3764a36473d70373.scope: Deactivated successfully.
Nov 24 18:51:21 compute-0 podman[282492]: 2025-11-24 18:51:21.524460143 +0000 UTC m=+0.041851669 container create cedf4871345cb2c13cd9b27ec618b471943642448ecb920721b909336ce8dd16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 18:51:21 compute-0 systemd[1]: Started libpod-conmon-cedf4871345cb2c13cd9b27ec618b471943642448ecb920721b909336ce8dd16.scope.
Nov 24 18:51:21 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:51:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a52bb2514e2be603a824d7cb00d8eff7d35d954ff9cc4dddef572ee3d0df3e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:51:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a52bb2514e2be603a824d7cb00d8eff7d35d954ff9cc4dddef572ee3d0df3e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:51:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a52bb2514e2be603a824d7cb00d8eff7d35d954ff9cc4dddef572ee3d0df3e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:51:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a52bb2514e2be603a824d7cb00d8eff7d35d954ff9cc4dddef572ee3d0df3e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:51:21 compute-0 podman[282492]: 2025-11-24 18:51:21.504719268 +0000 UTC m=+0.022110764 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:51:21 compute-0 podman[282492]: 2025-11-24 18:51:21.604673024 +0000 UTC m=+0.122064540 container init cedf4871345cb2c13cd9b27ec618b471943642448ecb920721b909336ce8dd16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:51:21 compute-0 podman[282492]: 2025-11-24 18:51:21.611571444 +0000 UTC m=+0.128962930 container start cedf4871345cb2c13cd9b27ec618b471943642448ecb920721b909336ce8dd16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:51:21 compute-0 podman[282492]: 2025-11-24 18:51:21.615717056 +0000 UTC m=+0.133108552 container attach cedf4871345cb2c13cd9b27ec618b471943642448ecb920721b909336ce8dd16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:51:21 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1105: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:22 compute-0 charming_davinci[282507]: {
Nov 24 18:51:22 compute-0 charming_davinci[282507]:     "0": [
Nov 24 18:51:22 compute-0 charming_davinci[282507]:         {
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "devices": [
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "/dev/loop3"
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             ],
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "lv_name": "ceph_lv0",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "lv_size": "21470642176",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "name": "ceph_lv0",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "tags": {
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.cluster_name": "ceph",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.crush_device_class": "",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.encrypted": "0",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.osd_id": "0",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.type": "block",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.vdo": "0"
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             },
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "type": "block",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "vg_name": "ceph_vg0"
Nov 24 18:51:22 compute-0 charming_davinci[282507]:         }
Nov 24 18:51:22 compute-0 charming_davinci[282507]:     ],
Nov 24 18:51:22 compute-0 charming_davinci[282507]:     "1": [
Nov 24 18:51:22 compute-0 charming_davinci[282507]:         {
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "devices": [
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "/dev/loop4"
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             ],
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "lv_name": "ceph_lv1",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "lv_size": "21470642176",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "name": "ceph_lv1",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "tags": {
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.cluster_name": "ceph",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.crush_device_class": "",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.encrypted": "0",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.osd_id": "1",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.type": "block",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.vdo": "0"
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             },
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "type": "block",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "vg_name": "ceph_vg1"
Nov 24 18:51:22 compute-0 charming_davinci[282507]:         }
Nov 24 18:51:22 compute-0 charming_davinci[282507]:     ],
Nov 24 18:51:22 compute-0 charming_davinci[282507]:     "2": [
Nov 24 18:51:22 compute-0 charming_davinci[282507]:         {
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "devices": [
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "/dev/loop5"
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             ],
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "lv_name": "ceph_lv2",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "lv_size": "21470642176",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "name": "ceph_lv2",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "tags": {
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.cluster_name": "ceph",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.crush_device_class": "",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.encrypted": "0",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.osd_id": "2",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.type": "block",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:                 "ceph.vdo": "0"
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             },
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "type": "block",
Nov 24 18:51:22 compute-0 charming_davinci[282507]:             "vg_name": "ceph_vg2"
Nov 24 18:51:22 compute-0 charming_davinci[282507]:         }
Nov 24 18:51:22 compute-0 charming_davinci[282507]:     ]
Nov 24 18:51:22 compute-0 charming_davinci[282507]: }
Nov 24 18:51:22 compute-0 systemd[1]: libpod-cedf4871345cb2c13cd9b27ec618b471943642448ecb920721b909336ce8dd16.scope: Deactivated successfully.
Nov 24 18:51:22 compute-0 podman[282492]: 2025-11-24 18:51:22.415148859 +0000 UTC m=+0.932540365 container died cedf4871345cb2c13cd9b27ec618b471943642448ecb920721b909336ce8dd16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:51:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a52bb2514e2be603a824d7cb00d8eff7d35d954ff9cc4dddef572ee3d0df3e1-merged.mount: Deactivated successfully.
Nov 24 18:51:22 compute-0 podman[282492]: 2025-11-24 18:51:22.47824547 +0000 UTC m=+0.995636956 container remove cedf4871345cb2c13cd9b27ec618b471943642448ecb920721b909336ce8dd16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:51:22 compute-0 systemd[1]: libpod-conmon-cedf4871345cb2c13cd9b27ec618b471943642448ecb920721b909336ce8dd16.scope: Deactivated successfully.
Nov 24 18:51:22 compute-0 sudo[282386]: pam_unix(sudo:session): session closed for user root
Nov 24 18:51:22 compute-0 sudo[282528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:51:22 compute-0 sudo[282528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:51:22 compute-0 sudo[282528]: pam_unix(sudo:session): session closed for user root
Nov 24 18:51:22 compute-0 sshd-session[282531]: Accepted publickey for zuul from 192.168.122.10 port 41606 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:51:22 compute-0 systemd-logind[822]: New session 54 of user zuul.
Nov 24 18:51:22 compute-0 systemd[1]: Started Session 54 of User zuul.
Nov 24 18:51:22 compute-0 sshd-session[282531]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:51:22 compute-0 sudo[282555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:51:22 compute-0 sudo[282555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:51:22 compute-0 sudo[282555]: pam_unix(sudo:session): session closed for user root
Nov 24 18:51:22 compute-0 sudo[282582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:51:22 compute-0 sudo[282582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:51:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:51:22.745 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:51:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:51:22.747 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:51:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:51:22.747 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:51:22 compute-0 sudo[282582]: pam_unix(sudo:session): session closed for user root
Nov 24 18:51:22 compute-0 sudo[282605]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 24 18:51:22 compute-0 sudo[282605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:51:22 compute-0 sudo[282630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:51:22 compute-0 sudo[282630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:51:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:51:23 compute-0 podman[282708]: 2025-11-24 18:51:23.178622889 +0000 UTC m=+0.043341256 container create bd5265ac06f8f751ea36301be0fbfa457c94d473b027d5aac629dbece2389e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 18:51:23 compute-0 podman[282708]: 2025-11-24 18:51:23.153739348 +0000 UTC m=+0.018457735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:51:23 compute-0 ceph-mon[74927]: pgmap v1105: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:23 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1106: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:23 compute-0 systemd[1]: Started libpod-conmon-bd5265ac06f8f751ea36301be0fbfa457c94d473b027d5aac629dbece2389e00.scope.
Nov 24 18:51:23 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:51:23 compute-0 podman[282708]: 2025-11-24 18:51:23.877488442 +0000 UTC m=+0.742206829 container init bd5265ac06f8f751ea36301be0fbfa457c94d473b027d5aac629dbece2389e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_bohr, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:51:23 compute-0 podman[282708]: 2025-11-24 18:51:23.883539521 +0000 UTC m=+0.748257888 container start bd5265ac06f8f751ea36301be0fbfa457c94d473b027d5aac629dbece2389e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_bohr, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 18:51:23 compute-0 podman[282708]: 2025-11-24 18:51:23.886890363 +0000 UTC m=+0.751608750 container attach bd5265ac06f8f751ea36301be0fbfa457c94d473b027d5aac629dbece2389e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_bohr, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 18:51:23 compute-0 upbeat_bohr[282725]: 167 167
Nov 24 18:51:23 compute-0 systemd[1]: libpod-bd5265ac06f8f751ea36301be0fbfa457c94d473b027d5aac629dbece2389e00.scope: Deactivated successfully.
Nov 24 18:51:23 compute-0 podman[282708]: 2025-11-24 18:51:23.889567429 +0000 UTC m=+0.754285796 container died bd5265ac06f8f751ea36301be0fbfa457c94d473b027d5aac629dbece2389e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 18:51:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a27363e0b0d474f3eca9390dc98774ecbedc351c5c9ad2ec596668c4372b694-merged.mount: Deactivated successfully.
Nov 24 18:51:23 compute-0 podman[282708]: 2025-11-24 18:51:23.925879041 +0000 UTC m=+0.790597408 container remove bd5265ac06f8f751ea36301be0fbfa457c94d473b027d5aac629dbece2389e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:51:23 compute-0 systemd[1]: libpod-conmon-bd5265ac06f8f751ea36301be0fbfa457c94d473b027d5aac629dbece2389e00.scope: Deactivated successfully.
Nov 24 18:51:24 compute-0 podman[282776]: 2025-11-24 18:51:24.065055091 +0000 UTC m=+0.036351664 container create 5979caf9742fabcf401641f5947071648f37d2436d1f7d13111484f2f9356ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 18:51:24 compute-0 systemd[1]: Started libpod-conmon-5979caf9742fabcf401641f5947071648f37d2436d1f7d13111484f2f9356ded.scope.
Nov 24 18:51:24 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:51:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a2517146b6d3cb2433e1f7176cbae3082be80b970e50c5b2b454a76cb1eedd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:51:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a2517146b6d3cb2433e1f7176cbae3082be80b970e50c5b2b454a76cb1eedd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:51:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a2517146b6d3cb2433e1f7176cbae3082be80b970e50c5b2b454a76cb1eedd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:51:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a2517146b6d3cb2433e1f7176cbae3082be80b970e50c5b2b454a76cb1eedd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:51:24 compute-0 podman[282776]: 2025-11-24 18:51:24.050237327 +0000 UTC m=+0.021533930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:51:24 compute-0 podman[282776]: 2025-11-24 18:51:24.14846394 +0000 UTC m=+0.119760533 container init 5979caf9742fabcf401641f5947071648f37d2436d1f7d13111484f2f9356ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:51:24 compute-0 podman[282776]: 2025-11-24 18:51:24.160079966 +0000 UTC m=+0.131376559 container start 5979caf9742fabcf401641f5947071648f37d2436d1f7d13111484f2f9356ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:51:24 compute-0 podman[282776]: 2025-11-24 18:51:24.164038543 +0000 UTC m=+0.135335126 container attach 5979caf9742fabcf401641f5947071648f37d2436d1f7d13111484f2f9356ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 18:51:25 compute-0 trusting_satoshi[282804]: {
Nov 24 18:51:25 compute-0 trusting_satoshi[282804]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:51:25 compute-0 trusting_satoshi[282804]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:51:25 compute-0 trusting_satoshi[282804]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:51:25 compute-0 trusting_satoshi[282804]:         "osd_id": 0,
Nov 24 18:51:25 compute-0 trusting_satoshi[282804]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:51:25 compute-0 trusting_satoshi[282804]:         "type": "bluestore"
Nov 24 18:51:25 compute-0 trusting_satoshi[282804]:     },
Nov 24 18:51:25 compute-0 trusting_satoshi[282804]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:51:25 compute-0 trusting_satoshi[282804]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:51:25 compute-0 trusting_satoshi[282804]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:51:25 compute-0 trusting_satoshi[282804]:         "osd_id": 1,
Nov 24 18:51:25 compute-0 trusting_satoshi[282804]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:51:25 compute-0 trusting_satoshi[282804]:         "type": "bluestore"
Nov 24 18:51:25 compute-0 trusting_satoshi[282804]:     },
Nov 24 18:51:25 compute-0 trusting_satoshi[282804]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:51:25 compute-0 trusting_satoshi[282804]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:51:25 compute-0 trusting_satoshi[282804]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:51:25 compute-0 trusting_satoshi[282804]:         "osd_id": 2,
Nov 24 18:51:25 compute-0 trusting_satoshi[282804]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:51:25 compute-0 trusting_satoshi[282804]:         "type": "bluestore"
Nov 24 18:51:25 compute-0 trusting_satoshi[282804]:     }
Nov 24 18:51:25 compute-0 trusting_satoshi[282804]: }
Nov 24 18:51:25 compute-0 systemd[1]: libpod-5979caf9742fabcf401641f5947071648f37d2436d1f7d13111484f2f9356ded.scope: Deactivated successfully.
Nov 24 18:51:25 compute-0 podman[282776]: 2025-11-24 18:51:25.150266896 +0000 UTC m=+1.121563479 container died 5979caf9742fabcf401641f5947071648f37d2436d1f7d13111484f2f9356ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:51:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5a2517146b6d3cb2433e1f7176cbae3082be80b970e50c5b2b454a76cb1eedd-merged.mount: Deactivated successfully.
Nov 24 18:51:25 compute-0 podman[282776]: 2025-11-24 18:51:25.204867787 +0000 UTC m=+1.176164390 container remove 5979caf9742fabcf401641f5947071648f37d2436d1f7d13111484f2f9356ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:51:25 compute-0 systemd[1]: libpod-conmon-5979caf9742fabcf401641f5947071648f37d2436d1f7d13111484f2f9356ded.scope: Deactivated successfully.
Nov 24 18:51:25 compute-0 sudo[282630]: pam_unix(sudo:session): session closed for user root
Nov 24 18:51:25 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:51:25 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:51:25 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:51:25 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:51:25 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 2b8361f4-e302-40fe-b949-d2ce88bab0cc does not exist
Nov 24 18:51:25 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 5fa2154a-deb7-45cf-81cd-15b6bdcffeed does not exist
Nov 24 18:51:25 compute-0 ceph-mon[74927]: pgmap v1106: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:25 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:51:25 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:51:25 compute-0 sudo[282923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:51:25 compute-0 sudo[282923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:51:25 compute-0 sudo[282923]: pam_unix(sudo:session): session closed for user root
Nov 24 18:51:25 compute-0 sudo[282967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:51:25 compute-0 sudo[282967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:51:25 compute-0 sudo[282967]: pam_unix(sudo:session): session closed for user root
Nov 24 18:51:25 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14743 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:25 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1107: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:26 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14745 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:26 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 24 18:51:26 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3702689370' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 18:51:27 compute-0 ceph-mon[74927]: from='client.14743 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:27 compute-0 ceph-mon[74927]: pgmap v1107: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:27 compute-0 ceph-mon[74927]: from='client.14745 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:27 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3702689370' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 18:51:27 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1108: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:51:29 compute-0 ceph-mon[74927]: pgmap v1108: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:29 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1109: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:31 compute-0 ceph-mon[74927]: pgmap v1109: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:31 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1110: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:32 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:51:33 compute-0 ceph-mon[74927]: pgmap v1110: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:33 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1111: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:51:34
Nov 24 18:51:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:51:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:51:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['volumes', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'vms']
Nov 24 18:51:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:51:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:51:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:51:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:51:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:51:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:51:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:51:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:51:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:51:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:51:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:51:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:51:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:51:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:51:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:51:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:51:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:51:35 compute-0 ceph-mon[74927]: pgmap v1111: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:35 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1112: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:37 compute-0 ceph-mon[74927]: pgmap v1112: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:37 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1113: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:37 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:51:39 compute-0 podman[283147]: 2025-11-24 18:51:39.01144895 +0000 UTC m=+0.086262001 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Nov 24 18:51:39 compute-0 podman[283145]: 2025-11-24 18:51:39.018723439 +0000 UTC m=+0.103359031 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 24 18:51:39 compute-0 podman[283146]: 2025-11-24 18:51:39.018884353 +0000 UTC m=+0.104225872 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 24 18:51:39 compute-0 ceph-mon[74927]: pgmap v1113: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:39 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1114: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:41 compute-0 ceph-mon[74927]: pgmap v1114: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:41 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1115: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:42 compute-0 ovs-vsctl[283239]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 24 18:51:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:51:43 compute-0 virtqemud[270425]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 24 18:51:43 compute-0 virtqemud[270425]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 24 18:51:43 compute-0 virtqemud[270425]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 24 18:51:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:51:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:51:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:51:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:51:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 18:51:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:51:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 18:51:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:51:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:51:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:51:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 18:51:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:51:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:51:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:51:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:51:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:51:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:51:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:51:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:51:43 compute-0 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 18:51:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:51:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:51:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:51:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:51:43 compute-0 ceph-mon[74927]: pgmap v1115: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:43 compute-0 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: cache status {prefix=cache status} (starting...)
Nov 24 18:51:43 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1116: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:43 compute-0 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: client ls {prefix=client ls} (starting...)
Nov 24 18:51:43 compute-0 lvm[283599]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 24 18:51:43 compute-0 lvm[283599]: VG ceph_vg2 finished
Nov 24 18:51:44 compute-0 lvm[283606]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 24 18:51:44 compute-0 lvm[283606]: VG ceph_vg1 finished
Nov 24 18:51:44 compute-0 lvm[283609]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 18:51:44 compute-0 lvm[283609]: VG ceph_vg0 finished
Nov 24 18:51:44 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14749 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:44 compute-0 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: damage ls {prefix=damage ls} (starting...)
Nov 24 18:51:44 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14751 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:44 compute-0 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: dump loads {prefix=dump loads} (starting...)
Nov 24 18:51:44 compute-0 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 24 18:51:44 compute-0 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 24 18:51:44 compute-0 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 24 18:51:44 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 24 18:51:44 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4069699190' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 18:51:45 compute-0 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 24 18:51:45 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14757 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:45 compute-0 ceph-mgr[75218]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 24 18:51:45 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:51:45.220+0000 7f6377bb5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 24 18:51:45 compute-0 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 24 18:51:45 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:51:45 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2916895032' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:51:45 compute-0 ceph-mon[74927]: pgmap v1116: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:45 compute-0 ceph-mon[74927]: from='client.14749 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:45 compute-0 ceph-mon[74927]: from='client.14751 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:45 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/4069699190' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 18:51:45 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2916895032' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:51:45 compute-0 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 24 18:51:45 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 24 18:51:45 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2556796055' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 24 18:51:45 compute-0 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: ops {prefix=ops} (starting...)
Nov 24 18:51:45 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1117: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:45 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 24 18:51:45 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4145218467' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 24 18:51:46 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 24 18:51:46 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2849952787' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 18:51:46 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 24 18:51:46 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3836729523' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 24 18:51:46 compute-0 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: session ls {prefix=session ls} (starting...)
Nov 24 18:51:46 compute-0 ceph-mon[74927]: from='client.14757 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:46 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2556796055' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 24 18:51:46 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/4145218467' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 24 18:51:46 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2849952787' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 18:51:46 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3836729523' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 24 18:51:46 compute-0 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: status {prefix=status} (starting...)
Nov 24 18:51:46 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 24 18:51:46 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1898588925' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 18:51:46 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14771 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:46 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 24 18:51:46 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2764268314' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 18:51:46 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14775 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 24 18:51:47 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2393344985' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 18:51:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 24 18:51:47 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1203941721' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 18:51:47 compute-0 ceph-mon[74927]: pgmap v1117: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:47 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1898588925' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 18:51:47 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2764268314' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 18:51:47 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2393344985' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 18:51:47 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1203941721' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 18:51:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 24 18:51:47 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3593666717' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 18:51:47 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1118: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 24 18:51:47 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1770084270' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 24 18:51:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:51:48 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14787 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:48 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:51:48.080+0000 7f6377bb5640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 24 18:51:48 compute-0 ceph-mgr[75218]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 24 18:51:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 24 18:51:48 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3023611235' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 18:51:48 compute-0 ceph-mon[74927]: from='client.14771 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:48 compute-0 ceph-mon[74927]: from='client.14775 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:48 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3593666717' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 18:51:48 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1770084270' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 24 18:51:48 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3023611235' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 18:51:48 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14791 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 24 18:51:48 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/935284366' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 24 18:51:48 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14793 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 24 18:51:48 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2678684416' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 24 18:51:49 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14797 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:49 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 24 18:51:49 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1580327653' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 18:51:49 compute-0 ceph-mon[74927]: pgmap v1118: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:49 compute-0 ceph-mon[74927]: from='client.14787 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:49 compute-0 ceph-mon[74927]: from='client.14791 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:49 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/935284366' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 24 18:51:49 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2678684416' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 24 18:51:49 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1580327653' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 18:51:49 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14801 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:49 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1119: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:49 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 24 18:51:49 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1646305784' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 18:51:49 compute-0 ceph-osd[90655]: merge_log_dups log.dups.size()=0olog.dups.size()=9
Nov 24 18:51:49 compute-0 ceph-osd[90655]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002156 3 0.000070
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 1736704 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:45.398058+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 78 handle_osd_map epochs [78,79], i have 78, src has [1,79]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.090582848s of 10.453024864s, submitted: 149
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004428 2 0.000110
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006589 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004346 2 0.000082
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006672 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=78/79 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=78/79 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 79 handle_osd_map epochs [79,79], i have 79, src has [1,79]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 1728512 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=78/79 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=78/79 n=5 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005481 4 0.000244
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=78/79 n=5 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=78/79 n=5 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=78/79 n=5 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=78/79 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=78/79 n=6 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009344 4 0.000268
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=78/79 n=6 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=78/79 n=6 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=78/79 n=6 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:46.398234+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:15.989129+0000 osd.2 (osd.2) 60 : cluster [DBG] 6.8 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:16.003215+0000 osd.2 (osd.2) 61 : cluster [DBG] 6.8 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 1835008 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 61) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:15.989129+0000 osd.2 (osd.2) 60 : cluster [DBG] 6.8 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:16.003215+0000 osd.2 (osd.2) 61 : cluster [DBG] 6.8 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:47.398430+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:16.985373+0000 osd.2 (osd.2) 62 : cluster [DBG] 4.e scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:16.999468+0000 osd.2 (osd.2) 63 : cluster [DBG] 4.e scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 1835008 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 63) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:16.985373+0000 osd.2 (osd.2) 62 : cluster [DBG] 4.e scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:16.999468+0000 osd.2 (osd.2) 63 : cluster [DBG] 4.e scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 79 heartbeat osd_stat(store_statfs(0x4fe0e5000/0x0/0x4ffc00000, data 0x6672c/0xe8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:48.398741+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 621572 data_alloc: 218103808 data_used: 126976
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 1818624 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 79 handle_osd_map epochs [79,80], i have 79, src has [1,80]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 80 heartbeat osd_stat(store_statfs(0x4fe0e5000/0x0/0x4ffc00000, data 0x6672c/0xe8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:49.398860+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 1802240 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 80 heartbeat osd_stat(store_statfs(0x4fe0e1000/0x0/0x4ffc00000, data 0x682a9/0xeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:50.398995+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:20.005653+0000 osd.2 (osd.2) 64 : cluster [DBG] 4.1b scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:20.019723+0000 osd.2 (osd.2) 65 : cluster [DBG] 4.1b scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 1802240 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 65) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:20.005653+0000 osd.2 (osd.2) 64 : cluster [DBG] 4.1b scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:20.019723+0000 osd.2 (osd.2) 65 : cluster [DBG] 4.1b scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:51.399204+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 1794048 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 80 heartbeat osd_stat(store_statfs(0x4fe0e2000/0x0/0x4ffc00000, data 0x682a9/0xeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 80 handle_osd_map epochs [81,81], i have 81, src has [1,81]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:52.399331+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:22.014217+0000 osd.2 (osd.2) 66 : cluster [DBG] 4.1a scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:22.028272+0000 osd.2 (osd.2) 67 : cluster [DBG] 4.1a scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 1777664 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 67) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:22.014217+0000 osd.2 (osd.2) 66 : cluster [DBG] 4.1a scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:22.028272+0000 osd.2 (osd.2) 67 : cluster [DBG] 4.1a scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:53.399508+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:23.021814+0000 osd.2 (osd.2) 68 : cluster [DBG] 4.1c scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:23.035916+0000 osd.2 (osd.2) 69 : cluster [DBG] 4.1c scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 631440 data_alloc: 218103808 data_used: 139264
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 1769472 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 69) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:23.021814+0000 osd.2 (osd.2) 68 : cluster [DBG] 4.1c scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:23.035916+0000 osd.2 (osd.2) 69 : cluster [DBG] 4.1c scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:54.399676+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 1753088 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 81 handle_osd_map epochs [82,83], i have 81, src has [1,83]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 81 handle_osd_map epochs [82,83], i have 83, src has [1,83]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c(unlocked)] enter Initial
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=0 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000072 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=0 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000023 1 0.000039
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000191 1 0.000074
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000043 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000310 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c(unlocked)] enter Initial
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=0 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000043 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=0 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000031
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000111 1 0.000104
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000032 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000225 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:55.399809+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:24.942176+0000 osd.2 (osd.2) 70 : cluster [DBG] 6.1f scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:24.963355+0000 osd.2 (osd.2) 71 : cluster [DBG] 6.1f scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 1712128 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 71) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:24.942176+0000 osd.2 (osd.2) 70 : cluster [DBG] 6.1f scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:24.963355+0000 osd.2 (osd.2) 71 : cluster [DBG] 6.1f scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 83 handle_osd_map epochs [83,84], i have 83, src has [1,84]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.098665237s of 10.329547882s, submitted: 26
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.962248 2 0.000077
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.962537 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.962575 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000168 1 0.000244
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000012 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.963220 2 0.000136
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.963575 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.963618 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000150 1 0.000221
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000017 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: not registered w/ OSD
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 84 handle_osd_map epochs [84,84], i have 84, src has [1,84]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: not registered w/ OSD
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:56.400594+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:25.900272+0000 osd.2 (osd.2) 72 : cluster [DBG] 7.1c scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:25.945314+0000 osd.2 (osd.2) 73 : cluster [DBG] 7.1c scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 1703936 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 73) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:25.900272+0000 osd.2 (osd.2) 72 : cluster [DBG] 7.1c scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:25.945314+0000 osd.2 (osd.2) 73 : cluster [DBG] 7.1c scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 84 handle_osd_map epochs [84,85], i have 84, src has [1,85]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 84 heartbeat osd_stat(store_statfs(0x4fe0d1000/0x0/0x4ffc00000, data 0x70b6e/0xfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.c( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.094527 6 0.000124
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.c( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.c( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.1c( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.095100 6 0.000210
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.1c( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.1c( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: not registered w/ OSD
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: not registered w/ OSD
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.c( v 55'385 lc 55'74 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.049099 3 0.000163
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.c( v 55'385 lc 55'74 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.c( v 55'385 lc 55'74 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000076 1 0.000056
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.c( v 55'385 lc 55'74 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.117387 1 0.000067
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.1c( v 55'385 lc 55'136 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.166672 3 0.000155
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.1c( v 55'385 lc 55'136 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.1c( v 55'385 lc 55'136 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000087 1 0.000041
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.1c( v 55'385 lc 55'136 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.208135 1 0.000045
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:57.400795+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:26.943412+0000 osd.2 (osd.2) 74 : cluster [DBG] 3.18 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:26.958452+0000 osd.2 (osd.2) 75 : cluster [DBG] 3.18 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 1646592 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.507480 1 0.000056
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.674167 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started 1.768746 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.299128 1 0.000031
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.674124 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started 1.769312 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] exit Reset 0.000063 1 0.000099
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Start
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] exit Reset 0.000084 1 0.000132
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Start
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000041 1 0.000045
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000047 1 0.000049
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: merge_log_dups log.dups.size()=0olog.dups.size()=10
Nov 24 18:51:49 compute-0 ceph-osd[90655]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=10
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000795 2 0.000037
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 18:51:49 compute-0 ceph-osd[90655]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Nov 24 18:51:49 compute-0 ceph-osd[90655]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000820 3 0.000065
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 75) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:26.943412+0000 osd.2 (osd.2) 74 : cluster [DBG] 3.18 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:26.958452+0000 osd.2 (osd.2) 75 : cluster [DBG] 3.18 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:58.400994+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 671797 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 86 handle_osd_map epochs [86,87], i have 86, src has [1,87]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 86 handle_osd_map epochs [86,87], i have 87, src has [1,87]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.914199 2 0.000065
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.915152 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.914854 3 0.000064
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.915746 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/59 les/c/f=87/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003535 3 0.000216
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/59 les/c/f=87/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/59 les/c/f=87/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000025 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/59 les/c/f=87/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 1630208 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=6 ec=59/49 lis/c=86/59 les/c/f=87/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.065697 3 0.000158
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=6 ec=59/49 lis/c=86/59 les/c/f=87/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=6 ec=59/49 lis/c=86/59 les/c/f=87/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=6 ec=59/49 lis/c=86/59 les/c/f=87/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 87 handle_osd_map epochs [87,87], i have 87, src has [1,87]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:59.401110+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 1630208 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861424000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:00.401226+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:29.969254+0000 osd.2 (osd.2) 76 : cluster [DBG] 7.15 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:29.983334+0000 osd.2 (osd.2) 77 : cluster [DBG] 7.15 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 1613824 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 77) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:29.969254+0000 osd.2 (osd.2) 76 : cluster [DBG] 7.15 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:29.983334+0000 osd.2 (osd.2) 77 : cluster [DBG] 7.15 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:01.401393+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 1605632 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 87 heartbeat osd_stat(store_statfs(0x4fe0cd000/0x0/0x4ffc00000, data 0x7406c/0x101000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:02.401526+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 1605632 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:03.401665+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 674349 data_alloc: 218103808 data_used: 147456
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 1597440 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.16 deep-scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.16 deep-scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:04.401775+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:33.959423+0000 osd.2 (osd.2) 78 : cluster [DBG] 3.16 deep-scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:33.973576+0000 osd.2 (osd.2) 79 : cluster [DBG] 3.16 deep-scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 1597440 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 87 handle_osd_map epochs [87,88], i have 87, src has [1,88]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 79) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:33.959423+0000 osd.2 (osd.2) 78 : cluster [DBG] 3.16 deep-scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:33.973576+0000 osd.2 (osd.2) 79 : cluster [DBG] 3.16 deep-scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:05.401954+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 1531904 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:06.402076+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 1531904 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:07.402222+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 1531904 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 88 heartbeat osd_stat(store_statfs(0x4fe0c9000/0x0/0x4ffc00000, data 0x75be9/0x104000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.654691696s of 12.283568382s, submitted: 60
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:08.402372+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 682801 data_alloc: 218103808 data_used: 159744
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 1474560 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 89 handle_osd_map epochs [89,90], i have 89, src has [1,90]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:09.402455+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 1466368 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:10.402582+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:39.894044+0000 osd.2 (osd.2) 80 : cluster [DBG] 7.11 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:39.908164+0000 osd.2 (osd.2) 81 : cluster [DBG] 7.11 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 1441792 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 81) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:39.894044+0000 osd.2 (osd.2) 80 : cluster [DBG] 7.11 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:39.908164+0000 osd.2 (osd.2) 81 : cluster [DBG] 7.11 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:11.402743+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 1441792 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 90 heartbeat osd_stat(store_statfs(0x4fe0c3000/0x0/0x4ffc00000, data 0x792e3/0x10a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:12.402928+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 1433600 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13(unlocked)] enter Initial
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=0 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000051 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=0 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000027
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000185 1 0.000045
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000043 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000240 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:13.403070+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695051 data_alloc: 218103808 data_used: 163840
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 1433600 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 92 handle_osd_map epochs [93,93], i have 93, src has [1,93]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.007002 2 0.000066
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.007263 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.007282 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000052 1 0.000077
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000005 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:14.403186+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 1384448 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 93 handle_osd_map epochs [93,94], i have 93, src has [1,94]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fe0b8000/0x0/0x4ffc00000, data 0x7e442/0x113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 94 pg[9.13( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.006363 6 0.000036
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 94 pg[9.13( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 94 pg[9.13( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 94 pg[9.13( v 55'385 lc 55'118 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.007743 3 0.000133
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 94 pg[9.13( v 55'385 lc 55'118 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 94 pg[9.13( v 55'385 lc 55'118 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000110 1 0.000034
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 94 pg[9.13( v 55'385 lc 55'118 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 94 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.039540 1 0.000065
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 94 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:15.403301+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 1269760 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 94 handle_osd_map epochs [94,95], i have 94, src has [1,95]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.975169 1 0.000040
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.022681 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started 2.029080 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown mbc={}] exit Reset 0.000099 1 0.000157
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Start
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.010563 2 0.000066
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 95 handle_osd_map epochs [95,95], i have 95, src has [1,95]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: merge_log_dups log.dups.size()=0olog.dups.size()=11
Nov 24 18:51:49 compute-0 ceph-osd[90655]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001555 2 0.000430
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:16.403421+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:45.903844+0000 osd.2 (osd.2) 82 : cluster [DBG] 3.e scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:45.917856+0000 osd.2 (osd.2) 83 : cluster [DBG] 3.e scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 1253376 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 95 heartbeat osd_stat(store_statfs(0x4fcf16000/0x0/0x4ffc00000, data 0x7fefc/0x117000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 83) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:45.903844+0000 osd.2 (osd.2) 82 : cluster [DBG] 3.e scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:45.917856+0000 osd.2 (osd.2) 83 : cluster [DBG] 3.e scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 96 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997622 2 0.000061
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 96 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009886 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 96 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 96 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=95/96 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 96 handle_osd_map epochs [96,96], i have 96, src has [1,96]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 96 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=95/96 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 96 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=95/96 n=5 ec=59/49 lis/c=95/67 les/c/f=96/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.012413 4 0.000337
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 96 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=95/96 n=5 ec=59/49 lis/c=95/67 les/c/f=96/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 96 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=95/96 n=5 ec=59/49 lis/c=95/67 les/c/f=96/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 96 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=95/96 n=5 ec=59/49 lis/c=95/67 les/c/f=96/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 96 heartbeat osd_stat(store_statfs(0x4fcf12000/0x0/0x4ffc00000, data 0x81944/0x11a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:17.403641+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63725568 unmapped: 1228800 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:18.403765+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:47.864559+0000 osd.2 (osd.2) 84 : cluster [DBG] 3.11 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:47.878823+0000 osd.2 (osd.2) 85 : cluster [DBG] 3.11 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 715559 data_alloc: 218103808 data_used: 167936
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 1163264 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 96 heartbeat osd_stat(store_statfs(0x4fcf11000/0x0/0x4ffc00000, data 0x83377/0x11d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 85) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:47.864559+0000 osd.2 (osd.2) 84 : cluster [DBG] 3.11 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:47.878823+0000 osd.2 (osd.2) 85 : cluster [DBG] 3.11 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:19.403954+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 1163264 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 96 heartbeat osd_stat(store_statfs(0x4fcf11000/0x0/0x4ffc00000, data 0x83377/0x11d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.864919662s of 12.002065659s, submitted: 47
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:20.404099+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 1155072 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 97 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:21.404257+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 1138688 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:22.404427+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:51.817847+0000 osd.2 (osd.2) 86 : cluster [DBG] 3.5 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:51.832094+0000 osd.2 (osd.2) 87 : cluster [DBG] 3.5 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 1138688 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 98 handle_osd_map epochs [98,99], i have 98, src has [1,99]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 99 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=75) [2] r=0 lpr=75 crt=55'385 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 40.726456 70 0.000222
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 99 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=75) [2] r=0 lpr=75 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active 40.733645 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 99 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=75) [2] r=0 lpr=75 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary 41.747050 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 99 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=75) [2] r=0 lpr=75 crt=55'385 mlcod 0'0 active mbc={}] exit Started 41.747086 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 99 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=75) [2] r=0 lpr=75 crt=55'385 mlcod 0'0 active mbc={}] enter Reset
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 99 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99 pruub=15.274039268s) [0] r=-1 lpr=99 pi=[75,99)/1 crt=55'385 mlcod 0'0 active pruub 200.589569092s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 99 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99 pruub=15.273898125s) [0] r=-1 lpr=99 pi=[75,99)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 200.589569092s@ mbc={}] exit Reset 0.000182 1 0.000280
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 99 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99 pruub=15.273898125s) [0] r=-1 lpr=99 pi=[75,99)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 200.589569092s@ mbc={}] enter Started
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 99 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99 pruub=15.273898125s) [0] r=-1 lpr=99 pi=[75,99)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 200.589569092s@ mbc={}] enter Start
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 99 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99 pruub=15.273898125s) [0] r=-1 lpr=99 pi=[75,99)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 200.589569092s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 99 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99 pruub=15.273898125s) [0] r=-1 lpr=99 pi=[75,99)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 200.589569092s@ mbc={}] exit Start 0.000044 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 99 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99 pruub=15.273898125s) [0] r=-1 lpr=99 pi=[75,99)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 200.589569092s@ mbc={}] enter Started/Stray
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 87) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:51.817847+0000 osd.2 (osd.2) 86 : cluster [DBG] 3.5 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:51.832094+0000 osd.2 (osd.2) 87 : cluster [DBG] 3.5 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 99 handle_osd_map epochs [98,99], i have 99, src has [1,99]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:23.404599+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:52.781301+0000 osd.2 (osd.2) 88 : cluster [DBG] 7.8 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:52.795451+0000 osd.2 (osd.2) 89 : cluster [DBG] 7.8 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 728641 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 1122304 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 89) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:52.781301+0000 osd.2 (osd.2) 88 : cluster [DBG] 7.8 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:52.795451+0000 osd.2 (osd.2) 89 : cluster [DBG] 7.8 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99) [0] r=-1 lpr=99 pi=[75,99)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.033556 3 0.000138
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99) [0] r=-1 lpr=99 pi=[75,99)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.033657 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99) [0] r=-1 lpr=99 pi=[75,99)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped mbc={}] exit Reset 0.000054 1 0.000086
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Start
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000025 1 0.000045
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000024 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:24.404771+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:53.820439+0000 osd.2 (osd.2) 90 : cluster [DBG] 7.a scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:53.834589+0000 osd.2 (osd.2) 91 : cluster [DBG] 7.a scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 1089536 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 91) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:53.820439+0000 osd.2 (osd.2) 90 : cluster [DBG] 7.a scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:53.834589+0000 osd.2 (osd.2) 91 : cluster [DBG] 7.a scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 100 handle_osd_map epochs [101,101], i have 101, src has [1,101]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 101 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999801 4 0.000060
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 101 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.999900 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 101 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 101 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 101 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 101 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.013186 5 0.000261
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 101 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 101 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000084 1 0.000069
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 101 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 101 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000420 1 0.000080
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 101 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 101 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.059480 2 0.000081
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 101 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:25.404962+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 983040 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 101 heartbeat osd_stat(store_statfs(0x4fcf00000/0x0/0x4ffc00000, data 0x8bafa/0x12c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 102 handle_osd_map epochs [102,102], i have 102, src has [1,102]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.933730 1 0.000084
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active 1.007280 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary 2.007204 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started 2.007239 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] enter Reset
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102 pruub=15.005826950s) [0] async=[0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 55'385 active pruub 203.362533569s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102 pruub=15.005747795s) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 203.362533569s@ mbc={}] exit Reset 0.000124 1 0.000300
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102 pruub=15.005747795s) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 203.362533569s@ mbc={}] enter Started
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102 pruub=15.005747795s) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 203.362533569s@ mbc={}] enter Start
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102 pruub=15.005747795s) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 203.362533569s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102 pruub=15.005747795s) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 203.362533569s@ mbc={}] exit Start 0.000010 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102 pruub=15.005747795s) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 203.362533569s@ mbc={}] enter Started/Stray
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:26.405188+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 974848 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 103 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035288 7 0.000140
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 103 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 103 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 103 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000064 1 0.000092
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 103 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 103 pg[9.16( v 55'385 (0'0,55'385] lb MIN local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=-1 lpr=102 DELETING pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.035309 2 0.000224
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 103 pg[9.16( v 55'385 (0'0,55'385] lb MIN local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.035424 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 103 pg[9.16( v 55'385 (0'0,55'385] lb MIN local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.070788 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:27.405322+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 933888 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:28.405504+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 733775 data_alloc: 218103808 data_used: 184320
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 933888 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:29.405678+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 103 heartbeat osd_stat(store_statfs(0x4fcefc000/0x0/0x4ffc00000, data 0x8ef2c/0x131000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 933888 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:30.406035+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:59.810423+0000 osd.2 (osd.2) 92 : cluster [DBG] 7.5 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:23:59.824545+0000 osd.2 (osd.2) 93 : cluster [DBG] 7.5 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 103 heartbeat osd_stat(store_statfs(0x4fcefc000/0x0/0x4ffc00000, data 0x8ef2c/0x131000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 909312 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 93) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:59.810423+0000 osd.2 (osd.2) 92 : cluster [DBG] 7.5 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:23:59.824545+0000 osd.2 (osd.2) 93 : cluster [DBG] 7.5 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:31.406282+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 909312 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.501713753s of 11.634410858s, submitted: 37
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:32.406465+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:01.785965+0000 osd.2 (osd.2) 94 : cluster [DBG] 7.1 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:01.799997+0000 osd.2 (osd.2) 95 : cluster [DBG] 7.1 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 909312 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 95) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:01.785965+0000 osd.2 (osd.2) 94 : cluster [DBG] 7.1 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:01.799997+0000 osd.2 (osd.2) 95 : cluster [DBG] 7.1 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:33.406644+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 739363 data_alloc: 218103808 data_used: 192512
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 851968 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 104 heartbeat osd_stat(store_statfs(0x4fcef9000/0x0/0x4ffc00000, data 0x90aa9/0x134000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:34.406768+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 811008 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:35.406947+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:04.822462+0000 osd.2 (osd.2) 96 : cluster [DBG] 7.2 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:04.836655+0000 osd.2 (osd.2) 97 : cluster [DBG] 7.2 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19(unlocked)] enter Initial
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=0 pi=[67,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000066 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=0 pi=[67,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000023 1 0.000046
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000074 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000115 1 0.000179
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000035 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000198 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 106 handle_osd_map epochs [106,106], i have 106, src has [1,106]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 802816 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 97) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:04.822462+0000 osd.2 (osd.2) 96 : cluster [DBG] 7.2 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:04.836655+0000 osd.2 (osd.2) 97 : cluster [DBG] 7.2 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 106 handle_osd_map epochs [106,107], i have 106, src has [1,107]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.827240 2 0.000244
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.827504 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.827624 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000101 1 0.000156
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000011 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 107 handle_osd_map epochs [107,107], i have 107, src has [1,107]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:36.407129+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 107 heartbeat osd_stat(store_statfs(0x4fcef1000/0x0/0x4ffc00000, data 0x941a3/0x13a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 720896 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.8 deep-scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.8 deep-scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:37.407258+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:06.864509+0000 osd.2 (osd.2) 98 : cluster [DBG] 3.8 deep-scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:06.878662+0000 osd.2 (osd.2) 99 : cluster [DBG] 3.8 deep-scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 704512 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 108 pg[9.19( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.460468 5 0.000086
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 108 pg[9.19( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 108 pg[9.19( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: not registered w/ OSD
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 108 pg[9.19( v 55'385 lc 55'61 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.006087 4 0.000195
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 108 pg[9.19( v 55'385 lc 55'61 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 108 pg[9.19( v 55'385 lc 55'61 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000128 1 0.000039
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 108 pg[9.19( v 55'385 lc 55'61 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 108 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.061598 1 0.000039
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 108 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 108 heartbeat osd_stat(store_statfs(0x4fcef0000/0x0/0x4ffc00000, data 0x95c08/0x13d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 99) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:06.864509+0000 osd.2 (osd.2) 98 : cluster [DBG] 3.8 deep-scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:06.878662+0000 osd.2 (osd.2) 99 : cluster [DBG] 3.8 deep-scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.486427 1 0.000026
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.554380 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started 2.014903 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown mbc={}] exit Reset 0.000102 1 0.000190
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Start
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown mbc={}] exit Start 0.000014 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000036 1 0.000057
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Nov 24 18:51:49 compute-0 ceph-osd[90655]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001631 3 0.000127
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000020 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:38.407430+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 770148 data_alloc: 218103808 data_used: 192512
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 638976 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fcee7000/0x0/0x4ffc00000, data 0x9925c/0x144000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:39.407574+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 109 handle_osd_map epochs [109,110], i have 109, src has [1,110]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 109 handle_osd_map epochs [109,110], i have 110, src has [1,110]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 110 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.016149 2 0.000213
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 110 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.017963 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 110 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 110 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=109/110 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 110 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=109/110 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 110 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=109/110 n=5 ec=59/49 lis/c=109/67 les/c/f=110/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003511 3 0.000119
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 110 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=109/110 n=5 ec=59/49 lis/c=109/67 les/c/f=110/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 110 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=109/110 n=5 ec=59/49 lis/c=109/67 les/c/f=110/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 110 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=109/110 n=5 ec=59/49 lis/c=109/67 les/c/f=110/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 110 handle_osd_map epochs [110,110], i have 110, src has [1,110]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65388544 unmapped: 614400 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:40.407755+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65404928 unmapped: 598016 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:41.407881+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65404928 unmapped: 598016 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:42.408125+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65413120 unmapped: 589824 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:43.408289+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771602 data_alloc: 218103808 data_used: 192512
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65413120 unmapped: 589824 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.848556519s of 11.976703644s, submitted: 56
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fcee6000/0x0/0x4ffc00000, data 0x9ade1/0x147000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 110 handle_osd_map epochs [111,111], i have 110, src has [1,111]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 110 handle_osd_map epochs [111,112], i have 111, src has [1,112]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 111 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=86) [2] r=0 lpr=86 crt=55'385 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 45.396604 76 0.000290
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 111 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=86) [2] r=0 lpr=86 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active 45.400260 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 111 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=86) [2] r=0 lpr=86 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary 46.315436 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 111 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=86) [2] r=0 lpr=86 crt=55'385 mlcod 0'0 active mbc={}] exit Started 46.315458 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 111 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=86) [2] r=0 lpr=86 crt=55'385 mlcod 0'0 active mbc={}] enter Reset
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 111 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111 pruub=10.604092598s) [0] r=-1 lpr=111 pi=[86,111)/1 crt=55'385 mlcod 0'0 active pruub 216.645172119s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 111 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111 pruub=10.603497505s) [0] r=-1 lpr=111 pi=[86,111)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 216.645172119s@ mbc={}] exit Reset 0.000684 1 0.000752
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 111 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111 pruub=10.603497505s) [0] r=-1 lpr=111 pi=[86,111)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 216.645172119s@ mbc={}] enter Started
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 111 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111 pruub=10.603497505s) [0] r=-1 lpr=111 pi=[86,111)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 216.645172119s@ mbc={}] enter Start
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 111 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111 pruub=10.603497505s) [0] r=-1 lpr=111 pi=[86,111)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 216.645172119s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 111 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111 pruub=10.603497505s) [0] r=-1 lpr=111 pi=[86,111)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 216.645172119s@ mbc={}] exit Start 0.000010 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 111 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111 pruub=10.603497505s) [0] r=-1 lpr=111 pi=[86,111)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 216.645172119s@ mbc={}] enter Started/Stray
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:44.408458+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:13.762656+0000 osd.2 (osd.2) 100 : cluster [DBG] 3.7 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:13.776808+0000 osd.2 (osd.2) 101 : cluster [DBG] 3.7 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 112 handle_osd_map epochs [112,113], i have 112, src has [1,113]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 101) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:13.762656+0000 osd.2 (osd.2) 100 : cluster [DBG] 3.7 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:13.776808+0000 osd.2 (osd.2) 101 : cluster [DBG] 3.7 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111) [0] r=-1 lpr=111 pi=[86,111)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.509659 6 0.000066
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111) [0] r=-1 lpr=111 pi=[86,111)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.509764 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111) [0] r=-1 lpr=111 pi=[86,111)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped mbc={}] exit Reset 0.000578 1 0.000705
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Start
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped mbc={}] exit Start 0.000196 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001175 2 0.000629
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 113 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000128 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000036 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65462272 unmapped: 540672 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.c deep-scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.c deep-scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:45.408654+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:14.757285+0000 osd.2 (osd.2) 102 : cluster [DBG] 7.c deep-scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:14.771334+0000 osd.2 (osd.2) 103 : cluster [DBG] 7.c deep-scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 113 handle_osd_map epochs [113,114], i have 113, src has [1,114]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 103) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:14.757285+0000 osd.2 (osd.2) 102 : cluster [DBG] 7.c deep-scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:14.771334+0000 osd.2 (osd.2) 103 : cluster [DBG] 7.c deep-scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011142 3 0.000351
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.012752 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=75) [2] r=0 lpr=75 crt=55'385 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 62.976404 117 0.000311
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=75) [2] r=0 lpr=75 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active 62.985318 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=75) [2] r=0 lpr=75 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary 63.997906 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=75) [2] r=0 lpr=75 crt=55'385 mlcod 0'0 active mbc={}] exit Started 63.997938 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=75) [2] r=0 lpr=75 crt=55'385 mlcod 0'0 active mbc={}] enter Reset
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114 pruub=9.023878098s) [0] r=-1 lpr=114 pi=[75,114)/1 crt=55'385 mlcod 0'0 active pruub 216.589828491s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114 pruub=9.023796082s) [0] r=-1 lpr=114 pi=[75,114)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 216.589828491s@ mbc={}] exit Reset 0.000127 1 0.000797
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114 pruub=9.023796082s) [0] r=-1 lpr=114 pi=[75,114)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 216.589828491s@ mbc={}] enter Started
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114 pruub=9.023796082s) [0] r=-1 lpr=114 pi=[75,114)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 216.589828491s@ mbc={}] enter Start
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114 pruub=9.023796082s) [0] r=-1 lpr=114 pi=[75,114)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 216.589828491s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114 pruub=9.023796082s) [0] r=-1 lpr=114 pi=[75,114)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 216.589828491s@ mbc={}] exit Start 0.000011 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114 pruub=9.023796082s) [0] r=-1 lpr=114 pi=[75,114)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 216.589828491s@ mbc={}] enter Started/Stray
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 114 handle_osd_map epochs [114,114], i have 114, src has [1,114]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.013166 5 0.000825
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000068 1 0.000067
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000538 1 0.000053
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.059238 2 0.000058
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65331200 unmapped: 671744 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:46.408827+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 114 handle_osd_map epochs [114,115], i have 114, src has [1,115]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.941351 1 0.000139
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active 1.014739 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary 2.027567 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started 2.027881 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] enter Reset
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=-1 lpr=114 pi=[75,114)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013965 3 0.000291
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=-1 lpr=114 pi=[75,114)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.014015 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115 pruub=14.998288155s) [0] async=[0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 55'385 active pruub 223.578338623s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=-1 lpr=114 pi=[75,114)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115 pruub=14.998158455s) [0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 223.578338623s@ mbc={}] exit Reset 0.000170 1 0.000230
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115 pruub=14.998158455s) [0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 223.578338623s@ mbc={}] enter Started
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115 pruub=14.998158455s) [0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 223.578338623s@ mbc={}] enter Start
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115 pruub=14.998158455s) [0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 223.578338623s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115 pruub=14.998158455s) [0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 223.578338623s@ mbc={}] exit Start 0.000015 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115 pruub=14.998158455s) [0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 223.578338623s@ mbc={}] enter Started/Stray
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped mbc={}] exit Reset 0.000426 1 0.000457
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Start
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped mbc={}] exit Start 0.000006 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 115 handle_osd_map epochs [115,115], i have 115, src has [1,115]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.012417 2 0.000571
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 115 handle_osd_map epochs [115,115], i have 115, src has [1,115]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000036 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65331200 unmapped: 671744 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:47.408958+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fced6000/0x0/0x4ffc00000, data 0xa3560/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 115 handle_osd_map epochs [116,116], i have 115, src has [1,116]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 115 handle_osd_map epochs [116,116], i have 116, src has [1,116]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996884 3 0.000116
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.009410 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=76) [2] r=0 lpr=76 crt=55'385 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 63.991223 121 0.000364
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=76) [2] r=0 lpr=76 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active 63.998521 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=76) [2] r=0 lpr=76 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary 65.010268 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=76) [2] r=0 lpr=76 crt=55'385 mlcod 0'0 active mbc={}] exit Started 65.010291 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=76) [2] r=0 lpr=76 crt=55'385 mlcod 0'0 active mbc={}] enter Reset
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116 pruub=8.010222435s) [1] r=-1 lpr=116 pi=[76,116)/1 crt=55'385 mlcod 0'0 active pruub 217.601470947s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116 pruub=8.010166168s) [1] r=-1 lpr=116 pi=[76,116)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 217.601470947s@ mbc={}] exit Reset 0.000095 1 0.000145
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116 pruub=8.010166168s) [1] r=-1 lpr=116 pi=[76,116)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 217.601470947s@ mbc={}] enter Started
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116 pruub=8.010166168s) [1] r=-1 lpr=116 pi=[76,116)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 217.601470947s@ mbc={}] enter Start
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116 pruub=8.010166168s) [1] r=-1 lpr=116 pi=[76,116)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 217.601470947s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116 pruub=8.010166168s) [1] r=-1 lpr=116 pi=[76,116)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 217.601470947s@ mbc={}] exit Start 0.000014 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116 pruub=8.010166168s) [1] r=-1 lpr=116 pi=[76,116)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 217.601470947s@ mbc={}] enter Started/Stray
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.015727 7 0.000122
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000495 1 0.000066
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.006072 5 0.000268
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000068 1 0.000028
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000467 1 0.000055
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 116 handle_osd_map epochs [116,116], i have 116, src has [1,116]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1c( v 55'385 (0'0,55'385] lb MIN local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=-1 lpr=115 DELETING pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.064913 2 0.000212
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1c( v 55'385 (0'0,55'385] lb MIN local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.065460 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1c( v 55'385 (0'0,55'385] lb MIN local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.081255 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.095381 2 0.000049
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 630784 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:48.409070+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:17.663847+0000 osd.2 (osd.2) 104 : cluster [DBG] 7.e scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:17.677999+0000 osd.2 (osd.2) 105 : cluster [DBG] 7.e scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 116 handle_osd_map epochs [116,117], i have 116, src has [1,117]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 116 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.911250 1 0.000081
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=-1 lpr=116 pi=[76,116)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.012657 3 0.000072
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=-1 lpr=116 pi=[76,116)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.012714 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=-1 lpr=116 pi=[76,116)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped mbc={}] exit Reset 0.000084 1 0.000119
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Start
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped mbc={}] exit Start 0.000012 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 105) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:17.663847+0000 osd.2 (osd.2) 104 : cluster [DBG] 7.e scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:17.677999+0000 osd.2 (osd.2) 105 : cluster [DBG] 7.e scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active 1.014406 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary 2.023856 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started 2.024405 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] enter Reset
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117 pruub=14.991624832s) [0] async=[0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 55'385 active pruub 225.596588135s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117 pruub=14.991396904s) [0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 225.596588135s@ mbc={}] exit Reset 0.000270 1 0.001286
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117 pruub=14.991396904s) [0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 225.596588135s@ mbc={}] enter Started
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117 pruub=14.991396904s) [0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 225.596588135s@ mbc={}] enter Start
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117 pruub=14.991396904s) [0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 225.596588135s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117 pruub=14.991396904s) [0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 225.596588135s@ mbc={}] exit Start 0.000100 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117 pruub=14.991396904s) [0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 225.596588135s@ mbc={}] enter Started/Stray
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001414 2 0.000252
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000039 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 117 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 117 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784560 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65445888 unmapped: 557056 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:49.409233+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 117 handle_osd_map epochs [117,118], i have 117, src has [1,118]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 117 handle_osd_map epochs [118,118], i have 118, src has [1,118]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.017639 3 0.000119
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.019303 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034310 7 0.000306
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000107 1 0.000052
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1e( v 55'385 (0'0,55'385] lb MIN local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=-1 lpr=117 DELETING pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.040892 2 0.000241
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1e( v 55'385 (0'0,55'385] lb MIN local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.041043 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1e( v 55'385 (0'0,55'385] lb MIN local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.075523 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65445888 unmapped: 557056 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:50.409363+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 118 handle_osd_map epochs [118,118], i have 118, src has [1,118]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.914904 5 0.001236
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000132 1 0.000159
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000743 1 0.000056
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.043437 2 0.000163
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.054906 1 0.000239
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active 1.014501 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary 2.033843 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started 2.034018 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] enter Reset
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119 pruub=15.899600029s) [1] async=[1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 55'385 active pruub 228.537811279s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119 pruub=15.899522781s) [1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 228.537811279s@ mbc={}] exit Reset 0.000113 1 0.000183
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119 pruub=15.899522781s) [1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 228.537811279s@ mbc={}] enter Started
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119 pruub=15.899522781s) [1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 228.537811279s@ mbc={}] enter Start
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119 pruub=15.899522781s) [1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 228.537811279s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119 pruub=15.899522781s) [1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 228.537811279s@ mbc={}] exit Start 0.000007 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119 pruub=15.899522781s) [1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 228.537811279s@ mbc={}] enter Started/Stray
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 119 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65462272 unmapped: 540672 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:51.409481+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.128202 6 0.000139
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000415 2 0.000053
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65462272 unmapped: 540672 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] lb MIN local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=-1 lpr=119 DELETING pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.042811 2 0.000167
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] lb MIN local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.043258 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] lb MIN local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.171507 0 0.000000
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:52.409685+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65454080 unmapped: 548864 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fceca000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:53.409881+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 775471 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65462272 unmapped: 540672 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:54.410081+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65462272 unmapped: 540672 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:55.410233+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.599168777s of 11.808244705s, submitted: 61
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65470464 unmapped: 532480 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:56.410405+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:25.570864+0000 osd.2 (osd.2) 106 : cluster [DBG] 3.1d scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:25.584976+0000 osd.2 (osd.2) 107 : cluster [DBG] 3.1d scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 107) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:25.570864+0000 osd.2 (osd.2) 106 : cluster [DBG] 3.1d scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:25.584976+0000 osd.2 (osd.2) 107 : cluster [DBG] 3.1d scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65486848 unmapped: 516096 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:57.410661+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65486848 unmapped: 516096 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcecb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:58.410794+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 775739 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65519616 unmapped: 483328 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:59.410978+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65519616 unmapped: 483328 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcecb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:00.411111+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65519616 unmapped: 483328 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcecb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:01.411248+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:30.582266+0000 osd.2 (osd.2) 108 : cluster [DBG] 7.1a scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:30.596512+0000 osd.2 (osd.2) 109 : cluster [DBG] 7.1a scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 109) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:30.582266+0000 osd.2 (osd.2) 108 : cluster [DBG] 7.1a scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:30.596512+0000 osd.2 (osd.2) 109 : cluster [DBG] 7.1a scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65527808 unmapped: 475136 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:02.411425+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 475136 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:03.411570+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:32.675143+0000 osd.2 (osd.2) 110 : cluster [DBG] 3.1e scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:32.689239+0000 osd.2 (osd.2) 111 : cluster [DBG] 3.1e scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 778035 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 111) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:32.675143+0000 osd.2 (osd.2) 110 : cluster [DBG] 3.1e scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:32.689239+0000 osd.2 (osd.2) 111 : cluster [DBG] 3.1e scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 475136 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:04.411810+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 450560 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:05.412004+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:34.763302+0000 osd.2 (osd.2) 112 : cluster [DBG] 10.3 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:34.780972+0000 osd.2 (osd.2) 113 : cluster [DBG] 10.3 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 113) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:34.763302+0000 osd.2 (osd.2) 112 : cluster [DBG] 10.3 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:34.780972+0000 osd.2 (osd.2) 113 : cluster [DBG] 10.3 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.5 deep-scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.116951942s of 10.145874023s, submitted: 8
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.5 deep-scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65560576 unmapped: 1490944 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:06.412239+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:35.716821+0000 osd.2 (osd.2) 114 : cluster [DBG] 10.5 deep-scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:35.730943+0000 osd.2 (osd.2) 115 : cluster [DBG] 10.5 deep-scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 115) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:35.716821+0000 osd.2 (osd.2) 114 : cluster [DBG] 10.5 deep-scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:35.730943+0000 osd.2 (osd.2) 115 : cluster [DBG] 10.5 deep-scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65568768 unmapped: 1482752 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:07.412421+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.a scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.a scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65568768 unmapped: 1482752 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:08.412537+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:37.718552+0000 osd.2 (osd.2) 116 : cluster [DBG] 10.a scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:37.732661+0000 osd.2 (osd.2) 117 : cluster [DBG] 10.a scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 781479 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 117) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:37.718552+0000 osd.2 (osd.2) 116 : cluster [DBG] 10.a scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:37.732661+0000 osd.2 (osd.2) 117 : cluster [DBG] 10.a scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65568768 unmapped: 1482752 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:09.412681+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.c scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.c scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65576960 unmapped: 1474560 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:10.412974+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:39.729441+0000 osd.2 (osd.2) 118 : cluster [DBG] 10.c scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:39.743608+0000 osd.2 (osd.2) 119 : cluster [DBG] 10.c scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 119) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:39.729441+0000 osd.2 (osd.2) 118 : cluster [DBG] 10.c scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:39.743608+0000 osd.2 (osd.2) 119 : cluster [DBG] 10.c scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65576960 unmapped: 1474560 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:11.413249+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65585152 unmapped: 1466368 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:12.413450+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:41.669377+0000 osd.2 (osd.2) 120 : cluster [DBG] 10.18 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:41.683353+0000 osd.2 (osd.2) 121 : cluster [DBG] 10.18 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 121) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:41.669377+0000 osd.2 (osd.2) 120 : cluster [DBG] 10.18 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:41.683353+0000 osd.2 (osd.2) 121 : cluster [DBG] 10.18 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65585152 unmapped: 1466368 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:13.413693+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:42.658256+0000 osd.2 (osd.2) 122 : cluster [DBG] 10.1b scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:42.672423+0000 osd.2 (osd.2) 123 : cluster [DBG] 10.1b scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784925 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 123) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:42.658256+0000 osd.2 (osd.2) 122 : cluster [DBG] 10.1b scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:42.672423+0000 osd.2 (osd.2) 123 : cluster [DBG] 10.1b scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 1458176 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:14.413957+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 1458176 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:15.414123+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 1458176 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:16.414250+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.908637047s of 10.939035416s, submitted: 10
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65609728 unmapped: 1441792 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:17.414421+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:46.655854+0000 osd.2 (osd.2) 124 : cluster [DBG] 10.1c scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:46.669967+0000 osd.2 (osd.2) 125 : cluster [DBG] 10.1c scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 125) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:46.655854+0000 osd.2 (osd.2) 124 : cluster [DBG] 10.1c scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:46.669967+0000 osd.2 (osd.2) 125 : cluster [DBG] 10.1c scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65617920 unmapped: 1433600 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:18.414653+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 787223 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65634304 unmapped: 1417216 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:19.414804+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:48.632077+0000 osd.2 (osd.2) 126 : cluster [DBG] 10.1d scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:48.646106+0000 osd.2 (osd.2) 127 : cluster [DBG] 10.1d scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 127) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:48.632077+0000 osd.2 (osd.2) 126 : cluster [DBG] 10.1d scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:48.646106+0000 osd.2 (osd.2) 127 : cluster [DBG] 10.1d scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65634304 unmapped: 1417216 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:20.415025+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65634304 unmapped: 1417216 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:21.415234+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:50.687817+0000 osd.2 (osd.2) 128 : cluster [DBG] 10.1f scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:50.705406+0000 osd.2 (osd.2) 129 : cluster [DBG] 10.1f scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 129) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:50.687817+0000 osd.2 (osd.2) 128 : cluster [DBG] 10.1f scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:50.705406+0000 osd.2 (osd.2) 129 : cluster [DBG] 10.1f scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 1409024 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:22.415468+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 1409024 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:23.415599+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 788372 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 1409024 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:24.415757+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 1400832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:25.415877+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:54.748532+0000 osd.2 (osd.2) 130 : cluster [DBG] 8.15 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:54.766203+0000 osd.2 (osd.2) 131 : cluster [DBG] 8.15 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 1409024 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 131) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:54.748532+0000 osd.2 (osd.2) 130 : cluster [DBG] 8.15 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:54.766203+0000 osd.2 (osd.2) 131 : cluster [DBG] 8.15 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:26.416051+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:55.776442+0000 osd.2 (osd.2) 132 : cluster [DBG] 11.15 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:24:55.790562+0000 osd.2 (osd.2) 133 : cluster [DBG] 11.15 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 1400832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 133) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:55.776442+0000 osd.2 (osd.2) 132 : cluster [DBG] 11.15 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:24:55.790562+0000 osd.2 (osd.2) 133 : cluster [DBG] 11.15 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:27.416220+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 1400832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:28.416352+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 790669 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 1400832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:29.416492+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65658880 unmapped: 1392640 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:30.416638+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.093049049s of 14.133566856s, submitted: 10
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65658880 unmapped: 1392640 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:31.416951+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:00.789467+0000 osd.2 (osd.2) 134 : cluster [DBG] 11.2 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:00.803693+0000 osd.2 (osd.2) 135 : cluster [DBG] 11.2 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 1384448 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 135) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:00.789467+0000 osd.2 (osd.2) 134 : cluster [DBG] 11.2 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:00.803693+0000 osd.2 (osd.2) 135 : cluster [DBG] 11.2 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:32.417144+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:01.816707+0000 osd.2 (osd.2) 136 : cluster [DBG] 11.3 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:01.830807+0000 osd.2 (osd.2) 137 : cluster [DBG] 11.3 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 1376256 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 137) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:01.816707+0000 osd.2 (osd.2) 136 : cluster [DBG] 11.3 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:01.830807+0000 osd.2 (osd.2) 137 : cluster [DBG] 11.3 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:33.417370+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:02.865942+0000 osd.2 (osd.2) 138 : cluster [DBG] 8.2 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:02.879956+0000 osd.2 (osd.2) 139 : cluster [DBG] 8.2 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 794112 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 1376256 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 139) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:02.865942+0000 osd.2 (osd.2) 138 : cluster [DBG] 8.2 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:02.879956+0000 osd.2 (osd.2) 139 : cluster [DBG] 8.2 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:34.417564+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65683456 unmapped: 1368064 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:35.417713+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65683456 unmapped: 1368064 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:36.417841+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 1359872 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:37.417967+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 1359872 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:38.418120+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 794112 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.d scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65716224 unmapped: 1335296 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.d scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:39.418247+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:08.865484+0000 osd.2 (osd.2) 140 : cluster [DBG] 11.d scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:08.879452+0000 osd.2 (osd.2) 141 : cluster [DBG] 11.d scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65724416 unmapped: 1327104 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 141) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:08.865484+0000 osd.2 (osd.2) 140 : cluster [DBG] 11.d scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:08.879452+0000 osd.2 (osd.2) 141 : cluster [DBG] 11.d scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:40.418403+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:09.879057+0000 osd.2 (osd.2) 142 : cluster [DBG] 11.8 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:09.893103+0000 osd.2 (osd.2) 143 : cluster [DBG] 11.8 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65724416 unmapped: 1327104 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 143) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:09.879057+0000 osd.2 (osd.2) 142 : cluster [DBG] 11.8 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:09.893103+0000 osd.2 (osd.2) 143 : cluster [DBG] 11.8 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:41.418612+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65724416 unmapped: 1327104 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:42.418942+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 1318912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:43.419087+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796408 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 1318912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.d scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.196287155s of 13.231869698s, submitted: 10
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.d scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:44.419224+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:14.021616+0000 osd.2 (osd.2) 144 : cluster [DBG] 8.d scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:14.035608+0000 osd.2 (osd.2) 145 : cluster [DBG] 8.d scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65740800 unmapped: 1310720 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 145) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:14.021616+0000 osd.2 (osd.2) 144 : cluster [DBG] 8.d scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:14.035608+0000 osd.2 (osd.2) 145 : cluster [DBG] 8.d scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:45.419470+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65740800 unmapped: 1310720 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:46.419703+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65740800 unmapped: 1310720 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:47.419942+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:16.973300+0000 osd.2 (osd.2) 146 : cluster [DBG] 11.9 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:16.990961+0000 osd.2 (osd.2) 147 : cluster [DBG] 11.9 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 1302528 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 147) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:16.973300+0000 osd.2 (osd.2) 146 : cluster [DBG] 11.9 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:16.990961+0000 osd.2 (osd.2) 147 : cluster [DBG] 11.9 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:48.420304+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:17.974321+0000 osd.2 (osd.2) 148 : cluster [DBG] 11.18 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:17.988388+0000 osd.2 (osd.2) 149 : cluster [DBG] 11.18 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 799852 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 1302528 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 149) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:17.974321+0000 osd.2 (osd.2) 148 : cluster [DBG] 11.18 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:17.988388+0000 osd.2 (osd.2) 149 : cluster [DBG] 11.18 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:49.420462+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 1302528 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:50.420632+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 1286144 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:51.420755+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 1286144 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:52.420971+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 1286144 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:53.421138+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:22.906623+0000 osd.2 (osd.2) 150 : cluster [DBG] 8.4 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:22.920844+0000 osd.2 (osd.2) 151 : cluster [DBG] 8.4 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 800999 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 1277952 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1b deep-scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1b deep-scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 151) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:22.906623+0000 osd.2 (osd.2) 150 : cluster [DBG] 8.4 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:22.920844+0000 osd.2 (osd.2) 151 : cluster [DBG] 8.4 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:54.421282+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:23.934608+0000 osd.2 (osd.2) 152 : cluster [DBG] 11.1b deep-scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:23.948704+0000 osd.2 (osd.2) 153 : cluster [DBG] 11.1b deep-scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 1277952 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 153) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:23.934608+0000 osd.2 (osd.2) 152 : cluster [DBG] 11.1b deep-scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:23.948704+0000 osd.2 (osd.2) 153 : cluster [DBG] 11.1b deep-scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:55.421453+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 1269760 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:56.421577+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 1269760 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:57.421740+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65789952 unmapped: 1261568 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:58.421880+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 802148 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1236992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.896893501s of 14.935640335s, submitted: 10
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:59.422053+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:28.956997+0000 osd.2 (osd.2) 154 : cluster [DBG] 11.1e scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:28.971144+0000 osd.2 (osd.2) 155 : cluster [DBG] 11.1e scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 155) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:28.956997+0000 osd.2 (osd.2) 154 : cluster [DBG] 11.1e scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:28.971144+0000 osd.2 (osd.2) 155 : cluster [DBG] 11.1e scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1236992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:00.422312+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:29.935367+0000 osd.2 (osd.2) 156 : cluster [DBG] 11.1c scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:29.949582+0000 osd.2 (osd.2) 157 : cluster [DBG] 11.1c scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 157) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:29.935367+0000 osd.2 (osd.2) 156 : cluster [DBG] 11.1c scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:29.949582+0000 osd.2 (osd.2) 157 : cluster [DBG] 11.1c scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 1220608 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:01.422480+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:30.940941+0000 osd.2 (osd.2) 158 : cluster [DBG] 8.12 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:30.955016+0000 osd.2 (osd.2) 159 : cluster [DBG] 8.12 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 159) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:30.940941+0000 osd.2 (osd.2) 158 : cluster [DBG] 8.12 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:30.955016+0000 osd.2 (osd.2) 159 : cluster [DBG] 8.12 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 1212416 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:02.422796+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 1204224 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:03.422950+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 805594 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 1204224 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:04.423071+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 1204224 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:05.423211+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 1196032 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:06.423341+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 1196032 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:07.423482+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 1196032 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:08.423616+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 805594 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 1187840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:09.423775+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.869328499s of 10.890173912s, submitted: 6
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 1187840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:10.423939+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:39.847229+0000 osd.2 (osd.2) 160 : cluster [DBG] 11.11 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:39.861327+0000 osd.2 (osd.2) 161 : cluster [DBG] 11.11 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 1179648 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 161) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:39.847229+0000 osd.2 (osd.2) 160 : cluster [DBG] 11.11 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:39.861327+0000 osd.2 (osd.2) 161 : cluster [DBG] 11.11 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:11.424097+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 1171456 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:12.424242+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 1171456 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:13.424360+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 806743 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 1171456 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:14.424520+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 1163264 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:15.424660+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 1163264 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:16.424775+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65896448 unmapped: 1155072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:17.424915+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65896448 unmapped: 1155072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:18.425045+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 806743 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 1138688 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:19.425207+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65921024 unmapped: 1130496 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:20.425340+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65921024 unmapped: 1130496 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:21.425505+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.008776665s of 12.022459984s, submitted: 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 1122304 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:22.425667+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:51.869752+0000 osd.2 (osd.2) 162 : cluster [DBG] 8.1b scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:51.883829+0000 osd.2 (osd.2) 163 : cluster [DBG] 8.1b scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 1122304 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 163) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:51.869752+0000 osd.2 (osd.2) 162 : cluster [DBG] 8.1b scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:51.883829+0000 osd.2 (osd.2) 163 : cluster [DBG] 8.1b scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:23.425820+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 807891 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 1122304 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:24.425973+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 1114112 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:25.426146+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:54.934391+0000 osd.2 (osd.2) 164 : cluster [DBG] 11.12 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:54.948463+0000 osd.2 (osd.2) 165 : cluster [DBG] 11.12 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 1114112 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 165) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:54.934391+0000 osd.2 (osd.2) 164 : cluster [DBG] 11.12 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:54.948463+0000 osd.2 (osd.2) 165 : cluster [DBG] 11.12 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:26.426339+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 1105920 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:27.426485+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 1105920 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:28.710732+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:57.972041+0000 osd.2 (osd.2) 166 : cluster [DBG] 11.1f scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:25:57.986018+0000 osd.2 (osd.2) 167 : cluster [DBG] 11.1f scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810189 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 1105920 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 167) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:57.972041+0000 osd.2 (osd.2) 166 : cluster [DBG] 11.1f scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:25:57.986018+0000 osd.2 (osd.2) 167 : cluster [DBG] 11.1f scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:29.710965+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 1097728 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:30.711084+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:26:00.032162+0000 osd.2 (osd.2) 168 : cluster [DBG] 11.1a scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:26:00.045703+0000 osd.2 (osd.2) 169 : cluster [DBG] 11.1a scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65961984 unmapped: 1089536 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 169) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:26:00.032162+0000 osd.2 (osd.2) 168 : cluster [DBG] 11.1a scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:26:00.045703+0000 osd.2 (osd.2) 169 : cluster [DBG] 11.1a scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:31.711252+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65961984 unmapped: 1089536 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:32.711394+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65970176 unmapped: 1081344 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:33.711585+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 811338 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65970176 unmapped: 1081344 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:34.711707+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.b scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.079380035s of 13.108389854s, submitted: 8
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.b scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 1073152 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:35.711794+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:26:04.978136+0000 osd.2 (osd.2) 170 : cluster [DBG] 11.b scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:26:04.992242+0000 osd.2 (osd.2) 171 : cluster [DBG] 11.b scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 1073152 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 171) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:26:04.978136+0000 osd.2 (osd.2) 170 : cluster [DBG] 11.b scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:26:04.992242+0000 osd.2 (osd.2) 171 : cluster [DBG] 11.b scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:36.712065+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 1073152 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:37.712187+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 1064960 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:38.712338+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:26:08.017843+0000 osd.2 (osd.2) 172 : cluster [DBG] 8.1c scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:26:08.031995+0000 osd.2 (osd.2) 173 : cluster [DBG] 8.1c scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 813634 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1056768 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 173) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:26:08.017843+0000 osd.2 (osd.2) 172 : cluster [DBG] 8.1c scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:26:08.031995+0000 osd.2 (osd.2) 173 : cluster [DBG] 8.1c scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:39.712536+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 1048576 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:40.712662+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1056768 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:41.712801+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1056768 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:42.712970+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 1048576 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:43.713145+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:26:12.937601+0000 osd.2 (osd.2) 174 : cluster [DBG] 8.11 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:26:12.951659+0000 osd.2 (osd.2) 175 : cluster [DBG] 8.11 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.e scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.e scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 815929 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 1048576 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 175) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:26:12.937601+0000 osd.2 (osd.2) 174 : cluster [DBG] 8.11 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:26:12.951659+0000 osd.2 (osd.2) 175 : cluster [DBG] 8.11 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:44.713310+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:26:13.896692+0000 osd.2 (osd.2) 176 : cluster [DBG] 9.e scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:26:13.928403+0000 osd.2 (osd.2) 177 : cluster [DBG] 9.e scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 1040384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 177) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:26:13.896692+0000 osd.2 (osd.2) 176 : cluster [DBG] 9.e scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:26:13.928403+0000 osd.2 (osd.2) 177 : cluster [DBG] 9.e scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:45.713468+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:26:14.917944+0000 osd.2 (osd.2) 178 : cluster [DBG] 9.6 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:26:14.953093+0000 osd.2 (osd.2) 179 : cluster [DBG] 9.6 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.708541870s of 10.978280067s, submitted: 10
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 1024000 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 179) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:26:14.917944+0000 osd.2 (osd.2) 178 : cluster [DBG] 9.6 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:26:14.953093+0000 osd.2 (osd.2) 179 : cluster [DBG] 9.6 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:46.713672+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:26:15.956484+0000 osd.2 (osd.2) 180 : cluster [DBG] 9.17 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:26:15.981284+0000 osd.2 (osd.2) 181 : cluster [DBG] 9.17 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.f scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.f scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 1015808 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 181) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:26:15.956484+0000 osd.2 (osd.2) 180 : cluster [DBG] 9.17 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:26:15.981284+0000 osd.2 (osd.2) 181 : cluster [DBG] 9.17 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:47.713835+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:26:16.923329+0000 osd.2 (osd.2) 182 : cluster [DBG] 9.f scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:26:16.962129+0000 osd.2 (osd.2) 183 : cluster [DBG] 9.f scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.7 deep-scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.7 deep-scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 1007616 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 183) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:26:16.923329+0000 osd.2 (osd.2) 182 : cluster [DBG] 9.f scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:26:16.962129+0000 osd.2 (osd.2) 183 : cluster [DBG] 9.f scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:48.714002+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:26:17.971047+0000 osd.2 (osd.2) 184 : cluster [DBG] 9.7 deep-scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:26:18.002816+0000 osd.2 (osd.2) 185 : cluster [DBG] 9.7 deep-scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 820518 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 1007616 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 185) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:26:17.971047+0000 osd.2 (osd.2) 184 : cluster [DBG] 9.7 deep-scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:26:18.002816+0000 osd.2 (osd.2) 185 : cluster [DBG] 9.7 deep-scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:49.714163+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:26:18.983595+0000 osd.2 (osd.2) 186 : cluster [DBG] 9.8 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:26:19.022449+0000 osd.2 (osd.2) 187 : cluster [DBG] 9.8 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 1007616 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 187) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:26:18.983595+0000 osd.2 (osd.2) 186 : cluster [DBG] 9.8 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:26:19.022449+0000 osd.2 (osd.2) 187 : cluster [DBG] 9.8 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:50.714344+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 999424 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:51.714458+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 999424 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:52.714609+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 991232 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:53.714730+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:26:22.992875+0000 osd.2 (osd.2) 188 : cluster [DBG] 9.18 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:26:23.021130+0000 osd.2 (osd.2) 189 : cluster [DBG] 9.18 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 822813 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 991232 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 189) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:26:22.992875+0000 osd.2 (osd.2) 188 : cluster [DBG] 9.18 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:26:23.021130+0000 osd.2 (osd.2) 189 : cluster [DBG] 9.18 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:54.714954+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 991232 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:55.715082+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 983040 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:56.715207+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 983040 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:57.715989+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 974848 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:58.716117+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 822813 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 974848 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:59.716263+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 974848 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:00.716418+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.c deep-scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.923019409s of 14.958429337s, submitted: 10
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.c deep-scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66084864 unmapped: 966656 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:01.716539+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:26:30.914951+0000 osd.2 (osd.2) 190 : cluster [DBG] 9.c deep-scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:26:30.946683+0000 osd.2 (osd.2) 191 : cluster [DBG] 9.c deep-scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 191) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:26:30.914951+0000 osd.2 (osd.2) 190 : cluster [DBG] 9.c deep-scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:26:30.946683+0000 osd.2 (osd.2) 191 : cluster [DBG] 9.c deep-scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66093056 unmapped: 958464 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:02.716692+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:26:31.933870+0000 osd.2 (osd.2) 192 : cluster [DBG] 9.13 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:26:31.965710+0000 osd.2 (osd.2) 193 : cluster [DBG] 9.13 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 193) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:26:31.933870+0000 osd.2 (osd.2) 192 : cluster [DBG] 9.13 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:26:31.965710+0000 osd.2 (osd.2) 193 : cluster [DBG] 9.13 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 950272 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:03.716970+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 825108 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 950272 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:04.717076+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 950272 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:05.717204+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:26:34.926888+0000 osd.2 (osd.2) 194 : cluster [DBG] 9.19 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  will send 2025-11-24T18:26:34.966011+0000 osd.2 (osd.2) 195 : cluster [DBG] 9.19 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client handle_log_ack log(last 195) v1
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:26:34.926888+0000 osd.2 (osd.2) 194 : cluster [DBG] 9.19 scrub starts
Nov 24 18:51:49 compute-0 ceph-osd[90655]: log_client  logged 2025-11-24T18:26:34.966011+0000 osd.2 (osd.2) 195 : cluster [DBG] 9.19 scrub ok
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 942080 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:06.717358+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 942080 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:07.717467+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 942080 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:08.717702+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 933888 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:09.717847+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 933888 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:10.717984+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 925696 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:11.718110+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 917504 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:12.718264+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 917504 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:13.718381+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 909312 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:14.718631+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 909312 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:15.718763+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 901120 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:16.718975+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 901120 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:17.719101+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 901120 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:18.719296+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 892928 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:19.719401+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 892928 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:20.719516+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 884736 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:21.719820+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 884736 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:22.720023+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 884736 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:23.720157+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 884736 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:24.720282+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 876544 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:25.720392+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 876544 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:26.720495+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 868352 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:27.720598+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 868352 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:28.720814+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 860160 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:29.720955+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 860160 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:30.721069+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 860160 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:31.721207+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 851968 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:32.721350+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 851968 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:33.721457+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 851968 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:34.721636+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 843776 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:35.721770+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 843776 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:36.721927+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 843776 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:37.722049+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 835584 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:38.722203+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 835584 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:39.722322+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 835584 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:40.722487+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 827392 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:41.722652+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 827392 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:42.723024+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 819200 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:43.723170+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66240512 unmapped: 811008 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:44.723328+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 802816 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:45.723450+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 802816 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:46.723566+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 802816 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:47.723741+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 794624 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:48.723854+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 794624 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:49.723999+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 794624 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:50.724122+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 786432 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:51.724286+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 786432 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:52.724681+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 786432 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:53.724834+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 778240 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:54.724999+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 778240 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:55.725165+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66281472 unmapped: 770048 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:56.725324+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66281472 unmapped: 770048 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:57.725432+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66281472 unmapped: 770048 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:58.725560+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:59.725658+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:00.725771+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:01.725920+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:02.726067+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:03.726165+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:04.726265+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:05.726343+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:06.726450+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:07.726578+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:08.726726+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 745472 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:09.726830+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 745472 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:10.727137+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 745472 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:11.727262+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 737280 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:12.727403+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 737280 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:13.727537+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 729088 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:14.727665+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 729088 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:15.727778+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 729088 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:16.727947+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 729088 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:17.728080+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 720896 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:18.728235+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 720896 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:19.728375+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 712704 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:20.728489+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 712704 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:21.728640+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 712704 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:22.728792+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 704512 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:23.728991+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 704512 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:24.729119+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 696320 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:25.729242+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 696320 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:26.729359+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 696320 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:27.729483+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 688128 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:28.729634+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 688128 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:29.729845+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 679936 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:30.729990+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 679936 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:31.730137+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 679936 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:32.730296+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 671744 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:33.730410+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 671744 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:34.730525+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 655360 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:35.730652+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 655360 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:36.730776+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 655360 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:37.730961+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 647168 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:38.731162+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 647168 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:39.731285+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 638976 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:40.731408+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 638976 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:41.731558+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 638976 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:42.731722+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 630784 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:43.731885+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 630784 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:44.732071+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 630784 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:45.732201+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 622592 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:46.732327+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 622592 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:47.732489+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 630784 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:48.732591+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 622592 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:49.732745+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 614400 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:50.732990+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 606208 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:51.733120+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 606208 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:52.733328+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 606208 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:53.733449+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 598016 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:54.733645+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 598016 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:55.733821+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 589824 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:56.733946+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 589824 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:57.734194+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 589824 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:58.734316+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 581632 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:59.734451+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 581632 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:00.734571+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 581632 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:01.734705+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 573440 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:02.734950+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 573440 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:03.735097+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 565248 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:04.735411+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 557056 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:05.735608+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 557056 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:06.735928+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 548864 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:07.736209+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 548864 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:08.736594+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:09.738048+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 540672 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:10.738383+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 540672 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:11.739312+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 540672 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:12.739601+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 532480 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:13.739775+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 532480 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:14.740073+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 532480 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:15.740214+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 532480 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:16.740395+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 524288 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:17.740606+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 524288 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:18.740734+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 516096 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:19.740891+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 516096 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:20.741109+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 507904 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:21.741273+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 507904 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:22.741472+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 507904 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:23.741620+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:24.741814+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:25.742034+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:26.742197+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 491520 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:27.742367+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 491520 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:28.742498+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 483328 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:29.742624+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 483328 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:30.742813+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 475136 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:31.742961+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 475136 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:32.743158+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 475136 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:33.743294+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 466944 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:34.743484+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 466944 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:35.743642+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 466944 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:36.743850+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 466944 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:37.743971+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 458752 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:38.744103+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 458752 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:39.744239+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 450560 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:40.744361+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 450560 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:41.744500+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 442368 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:42.744657+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 442368 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:43.744791+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 442368 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:44.744930+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 434176 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:45.745035+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 434176 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:46.745168+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 425984 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:47.745312+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 425984 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:48.745598+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 417792 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:49.745726+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 409600 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:50.745871+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 409600 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:51.746031+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 409600 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:52.746217+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 401408 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:53.746333+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 401408 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:54.746442+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 401408 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:55.746573+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 393216 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:56.746701+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 385024 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:57.746814+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 385024 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:58.746972+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 376832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:59.747157+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 376832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:00.747443+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 376832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:01.747542+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 368640 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:02.747713+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 368640 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:03.747886+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 360448 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:04.748048+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 360448 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:05.748213+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 352256 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:06.748336+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 344064 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:07.748478+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 344064 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:08.748652+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 335872 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:09.748928+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 335872 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:10.749142+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 335872 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:11.749303+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 327680 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:12.749824+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 327680 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:13.750006+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 319488 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:14.750186+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 319488 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:15.750348+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 319488 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:16.750469+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 311296 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:17.750763+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 311296 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:18.751045+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 303104 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:19.751221+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 303104 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:20.751366+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 303104 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:21.751598+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 294912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:22.751755+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 294912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:23.751964+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 294912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:24.752106+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 286720 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:25.752276+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 278528 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:26.752443+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 270336 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:27.752635+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 270336 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:28.752968+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 270336 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:29.753102+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 262144 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:30.753222+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 262144 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:31.753343+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 262144 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:32.753503+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 253952 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:33.753654+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 253952 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:34.753781+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 253952 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:35.753927+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 245760 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:36.754056+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 245760 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:37.754176+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 245760 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:38.754301+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 237568 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:39.754468+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 237568 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:40.754614+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 229376 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:41.754764+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 229376 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:42.755041+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 221184 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:43.755264+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 221184 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:44.755507+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 221184 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:45.755734+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 212992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:46.755967+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 212992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:47.756196+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 212992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:48.756332+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 204800 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:49.756453+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 204800 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:50.756614+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 204800 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:51.756765+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 196608 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:52.756924+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 196608 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:53.757071+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 188416 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:54.757204+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 188416 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:55.757341+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 180224 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:56.757458+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 180224 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:57.757589+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 180224 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:58.757747+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 172032 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:59.757868+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 172032 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:00.757993+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 172032 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:01.758139+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 163840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:02.758309+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 163840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:03.758434+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 155648 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:04.758606+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 155648 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:05.758775+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 155648 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:06.758939+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 147456 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:07.759100+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 147456 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:08.759224+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 131072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:09.765489+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 131072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:10.765643+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 131072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:11.765794+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 131072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:12.765969+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 122880 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:13.766393+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 131072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:14.767357+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 122880 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:15.768018+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 122880 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:16.768272+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 122880 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:17.768998+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 5482 writes, 23K keys, 5482 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5482 writes, 769 syncs, 7.13 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5482 writes, 23K keys, 5482 commit groups, 1.0 writes per commit group, ingest: 18.33 MB, 0.03 MB/s
                                           Interval WAL: 5482 writes, 769 syncs, 7.13 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.027       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 57344 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:18.769370+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 57344 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:19.769490+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 49152 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:20.769714+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 49152 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:21.770049+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 49152 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:22.770357+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 40960 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:23.770628+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 40960 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:24.770851+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 40960 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:25.771042+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 32768 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:26.771192+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 32768 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:27.771331+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 24576 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:28.771465+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 24576 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:29.771602+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 24576 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:30.771747+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 16384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:31.771881+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 16384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:32.772321+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 16384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:33.772493+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 8192 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:34.772747+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 8192 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:35.772954+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 8192 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:36.773170+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 0 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:37.773299+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 0 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:38.773501+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:39.773624+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:40.773765+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:41.773912+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 1032192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:42.774045+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 1032192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:43.774165+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:44.774303+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:45.774428+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:46.774547+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:47.774672+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:48.774804+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 1007616 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:49.774964+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 1007616 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:50.775099+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:51.775166+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:52.775244+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:53.775379+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:54.775563+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:55.775711+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:56.775841+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:57.775973+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:58.776158+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:59.776298+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 974848 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:00.776406+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 974848 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:01.776537+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:02.776734+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:03.776932+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:04.777058+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 950272 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:05.777127+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 950272 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:06.777258+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 950272 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:07.777388+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 933888 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:08.777512+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 933888 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:09.777678+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 925696 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:10.777790+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 925696 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:11.777980+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 925696 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:12.778179+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:13.778322+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:14.778464+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:15.778797+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 909312 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:16.778974+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 909312 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:17.779087+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 909312 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:18.779196+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:19.779280+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:20.779467+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:21.779682+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:22.779888+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:23.780064+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:24.780200+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:25.780322+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:26.780471+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:27.780588+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:28.780709+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:29.780846+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 328.941497803s of 328.962036133s, submitted: 6
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 499712 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:30.780958+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:31.781087+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:32.781268+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:33.781392+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:34.781502+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:35.781612+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 262144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:36.781736+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 262144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:37.781847+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 262144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:38.781978+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:39.782086+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:40.782200+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:41.782320+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:42.782449+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 237568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:43.782586+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 237568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:44.782740+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 237568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:45.782870+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 229376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:46.782996+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 229376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:47.783114+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:48.783231+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:49.783346+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:50.783495+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 212992 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:51.783648+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 212992 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:52.783811+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:53.783994+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:54.784144+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:55.784274+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:56.784464+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:57.784623+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 188416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:58.784778+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 188416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:59.784978+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 180224 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:00.785153+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 180224 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:01.785282+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 172032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:02.785465+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 172032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:03.785580+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 163840 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:04.785710+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 147456 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:05.785820+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 147456 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:06.785947+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 139264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:07.786068+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 139264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:08.786193+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 139264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:09.786325+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 122880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:10.786456+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 122880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:11.786593+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 114688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:12.786766+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 114688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:13.786909+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:14.787055+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:15.787190+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:16.787307+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:17.787415+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:18.787570+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:19.787693+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:20.787879+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:21.788084+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:22.788275+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:23.788431+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:24.788609+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:25.788868+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:26.789068+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:27.789201+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:28.789316+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:29.789470+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:30.789610+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:31.789744+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:32.789960+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:33.790102+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:34.790265+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 90112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:35.790523+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:36.790727+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:37.790937+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:38.791123+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:39.791251+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:40.791383+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:41.791541+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:42.791706+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:43.791836+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:44.792029+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:45.792167+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:46.792281+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:47.792392+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:48.792558+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:49.792723+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:50.792888+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:51.793046+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:52.793206+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:53.793437+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:54.793592+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:55.793737+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:56.793890+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:57.794064+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:58.794298+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:59.794403+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:00.794499+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:01.794598+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:02.794786+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:03.794940+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:04.795054+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:05.795187+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:06.795360+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:07.795541+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:08.795658+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:09.795822+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:10.795994+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:11.796211+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:12.796401+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:13.796523+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:14.796646+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:15.796820+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:16.797014+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:17.797130+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:18.797309+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:19.797443+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:20.797583+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:21.797720+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:22.797935+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:23.798081+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:24.798210+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:25.798326+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:26.798434+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:27.798573+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:28.798713+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:29.798836+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:30.798976+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:31.799155+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:32.799339+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:33.799454+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:34.799654+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:35.799792+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:36.799907+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:37.800008+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:38.800146+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:39.800274+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:40.800505+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:41.800609+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:42.800921+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:43.801098+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:44.801262+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:45.801432+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:46.801680+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:47.801801+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:48.801931+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:49.802067+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:50.802204+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:51.802328+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:52.802488+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:53.802641+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:54.802811+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:55.802999+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:56.803163+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:57.803282+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:58.803439+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:59.803596+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:00.803758+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 40960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:01.803928+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 40960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:02.804111+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 40960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:03.804248+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 40960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:04.804428+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:05.804591+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:06.804738+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:07.804876+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:08.805045+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:09.805167+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:10.805311+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:11.805431+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:12.805557+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:13.805742+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:14.805928+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:15.806126+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:16.806326+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:17.806453+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:18.806608+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:19.806798+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:20.806988+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:21.807177+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:22.807383+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:23.807523+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:24.807712+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:25.807853+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:26.808009+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:27.808118+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:28.808262+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:29.808421+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:30.808544+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:31.808996+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:32.809157+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:33.809280+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:34.809392+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:35.809535+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:36.809654+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:37.809763+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:38.809880+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:39.810031+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:40.810147+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:41.810253+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:42.810380+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:43.810494+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:44.810639+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:45.810783+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:46.811112+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:47.811247+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:48.811367+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:49.811529+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:50.811639+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:51.811783+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:52.812005+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:53.812163+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:54.812295+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:55.812421+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:56.812581+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:57.812710+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:58.812822+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:59.812950+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:00.813087+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:01.813205+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:02.813844+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:03.814007+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:04.814160+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 1089536 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:05.814286+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:06.814441+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:07.814555+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:08.814870+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:09.815076+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:10.815219+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:11.815343+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:12.815485+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:13.815632+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:14.815761+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:15.815882+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:16.816019+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:17.816144+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:18.816255+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:19.816369+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:20.816474+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:21.816809+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:22.816981+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:23.817182+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:24.817324+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:25.817454+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:26.817579+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:27.817707+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:28.817819+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:29.817956+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:30.818066+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:31.818195+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:32.818340+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:33.818471+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:34.818613+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:35.818733+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:36.818931+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:37.819081+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:38.819198+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:39.819315+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:40.819575+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:41.819727+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:42.819980+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:43.820098+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:44.820230+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:45.820378+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:46.820557+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:47.820700+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:48.820875+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:49.821048+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:50.821184+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:51.821356+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:52.821523+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:53.821633+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:54.821770+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14805 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:55.821963+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:56.822097+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:57.822227+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:58.822381+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:59.822559+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:00.822802+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:01.822994+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:02.823162+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:03.823293+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:04.823450+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:05.823561+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:06.823672+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:07.823795+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:08.823942+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:09.824097+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:10.824245+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:11.824358+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:12.824513+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:13.824660+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:14.824791+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:15.824924+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:16.825071+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:17.825190+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:18.825316+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:19.825451+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:20.825567+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:21.825712+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:22.825856+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:23.825990+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:24.826126+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:25.826258+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:26.826370+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:27.826498+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:28.826622+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:29.826735+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:30.826861+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:31.827013+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:32.827202+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:33.827408+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:34.827581+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:35.827710+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:36.827863+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:37.828030+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:38.828153+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:39.828262+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:40.828412+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:41.828554+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:42.828720+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:43.828866+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:44.829304+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:45.829428+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:46.829548+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:47.829672+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:48.829804+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:49.829946+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:50.830086+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:51.830210+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:52.830364+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:53.830473+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:54.830725+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:55.830891+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:56.831067+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:57.831256+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:58.831380+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:59.831498+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:00.831605+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:01.831754+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:02.831914+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:03.832059+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:04.832216+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:05.832347+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:06.832615+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:07.832823+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:08.833129+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:09.833366+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:10.833630+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:11.833754+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:12.833914+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:13.834031+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:14.834153+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:15.834282+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:16.834395+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:17.834579+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:18.834694+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:19.834962+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:20.835122+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:21.835285+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:22.835436+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:23.835603+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:24.835767+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:25.835933+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:26.836073+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:27.836185+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:28.836369+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:29.836571+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:30.836752+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:31.836919+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:32.837211+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:33.837431+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:34.837595+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:35.837928+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:36.838053+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:37.838239+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:38.838432+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:39.838598+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:40.838799+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:41.838957+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:42.839097+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:43.839216+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:44.839341+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:45.839500+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:46.839646+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:47.839787+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:48.839965+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:49.840168+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:50.840433+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:51.840561+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:52.840719+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:53.840969+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:54.841392+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:55.841798+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:56.841944+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:57.842079+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:58.842195+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:59.842321+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:00.842448+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:01.842602+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:02.842754+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:03.842942+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:04.843078+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:05.843220+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:06.843358+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:07.843559+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:08.843742+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:09.844003+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:10.844194+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:11.844317+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:12.844548+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:13.844665+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:14.844880+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:15.845058+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:16.845180+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:17.845306+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:18.845430+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:19.845557+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:20.845695+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:21.845833+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:22.845992+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:23.846111+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:24.846230+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:25.846359+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:26.846460+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:27.846590+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:28.846699+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:29.846926+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:30.847029+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:31.847127+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:32.847271+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:33.849351+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:34.849487+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:35.849605+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:36.849715+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:37.849869+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:38.850062+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:39.850176+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:40.850273+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:41.850422+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:42.850591+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:43.850726+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:44.850859+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:45.850948+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:46.851066+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:47.851228+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:48.851754+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:49.851985+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:50.852104+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:51.852635+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:52.852811+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:53.853082+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:54.863078+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:55.863394+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:56.863554+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:57.863690+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:58.863981+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:59.864095+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:00.864263+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:01.864407+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:02.864578+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:03.864704+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:04.864826+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:05.864963+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:06.865102+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:07.865276+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:08.865438+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:09.865565+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:10.865736+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:11.865929+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:12.866152+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:13.866303+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:14.866462+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:15.866619+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:16.866747+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:17.866882+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:18.867090+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:19.867239+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:20.867385+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:21.867582+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:22.867790+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:23.867920+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:24.868033+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:25.868142+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:26.868257+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:27.868373+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:28.868490+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:29.868647+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:30.868780+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:31.868971+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:32.869191+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:33.869327+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:34.869481+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:35.869681+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:36.869958+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:37.870176+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:38.870348+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:39.870524+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:40.870650+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:41.870788+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:42.871038+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:43.871210+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:44.871362+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:45.871510+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:46.871652+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:47.871923+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:48.872080+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:49.872183+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:50.872299+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:51.872427+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:52.872641+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:53.872766+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:54.872912+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:55.873058+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:56.873176+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:57.873358+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:58.873511+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:59.873629+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:00.873811+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:01.873928+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:02.874076+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:03.874214+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:04.874336+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:05.874481+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:06.874637+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:07.874788+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:08.874967+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:09.875129+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:10.875338+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:11.875461+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:12.875605+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:13.875731+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:14.875890+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:15.876093+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:16.876228+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:17.876351+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 5662 writes, 23K keys, 5662 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5662 writes, 859 syncs, 6.59 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.027       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:18.876492+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:19.876654+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:20.876772+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:21.876936+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:22.877114+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:23.877218+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:24.877332+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:25.877495+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:26.877623+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:27.877772+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:28.877889+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:29.878039+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:30.878156+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:31.878326+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:32.878545+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:33.878635+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:34.878749+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:35.878878+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:36.878949+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:37.879060+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:38.879181+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:39.879291+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:40.879406+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:41.879526+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:42.879662+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:43.879783+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:44.879953+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:45.880066+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:46.880193+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:47.880305+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:48.880467+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:49.880704+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:50.880830+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:51.880956+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:52.881518+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:53.881631+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 884736 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:54.881770+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 884736 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:55.881946+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:56.882071+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:57.882248+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:58.882375+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:59.882499+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:00.882640+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:01.882776+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:02.883024+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:03.883139+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:04.883255+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:05.883368+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:06.883479+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:07.883625+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:08.883789+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:09.883930+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:10.884049+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:11.884169+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:12.884304+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:13.884438+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:14.884607+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:15.884751+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:16.884870+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:17.885010+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:18.885119+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:19.885231+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 851968 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:20.885363+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:21.885500+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:22.885745+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:23.885878+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:24.886012+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:25.886128+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:26.886386+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:27.886503+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:28.886624+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 884736 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:29.886994+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 599.769165039s of 600.112915039s, submitted: 90
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 1744896 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:30.887112+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:31.887406+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:32.887545+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:33.887718+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:34.887873+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:35.887980+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:36.888151+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:37.888279+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:38.888423+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:39.888684+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:40.888833+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:41.888957+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:42.889163+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:43.889359+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:44.889488+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:45.889623+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:46.889769+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:47.889963+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:48.890096+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:49.890235+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:50.890333+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:51.890447+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:52.890598+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:53.890756+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:54.890948+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:55.891065+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:56.891193+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:57.891370+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:58.891511+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:59.891665+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:00.891841+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:01.892631+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:02.893417+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:03.893647+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:04.893778+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 1556480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:05.893934+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 1556480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:06.894051+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 1556480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:07.894160+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 1556480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:08.894288+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 1556480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:09.894445+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:10.894568+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:11.894719+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:12.894873+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:13.895001+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:14.895127+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:15.895232+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:16.895345+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:17.895477+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:18.895593+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:19.895703+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:20.895852+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:21.896003+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:22.896213+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:23.896368+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:24.896494+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:25.896612+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:26.896749+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:27.896953+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:28.897271+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:29.897441+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:30.897578+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:31.897750+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:32.897961+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:33.898127+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:34.898244+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:35.898364+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:36.898487+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:37.898614+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:38.898754+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:39.898988+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:40.899095+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:41.899203+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:42.899359+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:43.899512+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:44.899633+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:45.899802+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:46.899943+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:49 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:49 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:47.900084+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:48.900230+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:49.900446+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:49 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:50.900675+0000)
Nov 24 18:51:49 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:51.900825+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:52.900976+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:53.901093+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:54.901227+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:55.901353+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:56.901481+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:57.901592+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:58.901731+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:59.901850+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:00.901976+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:01.902101+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:02.902290+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:03.902446+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:04.902589+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:05.902722+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:06.902961+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:07.903217+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:08.903417+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:09.903642+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:10.903833+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:11.904017+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:12.904239+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:13.904413+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:14.904571+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:15.904794+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:16.905009+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:17.905153+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:18.905311+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:19.905555+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:20.905747+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:21.905940+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:22.906107+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:23.906293+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:24.906424+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:25.906581+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:26.906707+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:27.906854+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:28.906971+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:29.907116+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:30.907270+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:31.907415+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:32.907592+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:33.907706+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:34.907842+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:35.908003+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:36.908154+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:37.908280+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:38.908448+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:39.908771+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:40.909136+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:41.909400+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:42.909713+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:43.909997+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:44.910293+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:45.910531+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:46.910745+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:47.910986+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:48.911249+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:49.911478+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:50.911718+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:51.911952+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:52.912203+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:53.912399+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:54.912594+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:55.912808+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:56.912975+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:57.913096+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:58.913226+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:59.913377+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:00.913535+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:01.913740+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:02.913932+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:03.914074+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:04.914226+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:05.914431+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:06.914609+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:07.914735+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:08.914852+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:09.914991+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:10.915153+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:11.915277+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:12.915479+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:13.915661+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:14.915836+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:15.915986+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:16.916204+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:17.916342+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:18.916515+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:19.916652+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:20.916777+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:21.916960+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:22.917133+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:23.917255+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:24.917413+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:25.917562+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:26.917695+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:27.917833+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:28.917982+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:29.918162+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:30.918302+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:31.918555+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:32.918756+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:33.919050+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:34.919305+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:35.919510+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:36.919777+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:37.919965+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:38.920102+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:39.920249+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:40.920386+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:41.920545+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:42.920781+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:43.920959+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:44.921277+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:45.921844+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:46.923084+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:47.923428+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:48.923567+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:49.924462+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:50.924984+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:51.925223+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:52.925823+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:53.926137+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:54.926719+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:55.926977+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:56.927136+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:57.927267+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:58.927429+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:59.927591+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:00.927840+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:01.928083+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:02.928245+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:03.928362+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:04.928493+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:05.928691+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:06.928840+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:07.928945+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:08.929056+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:09.929185+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:10.929412+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:11.929536+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:12.929684+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:13.929864+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:14.930029+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:15.930256+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:16.930417+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:17.930541+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:18.930658+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:19.930723+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:20.930843+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:21.930960+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:22.931121+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:23.931258+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:24.931467+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:25.931634+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:26.931793+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:27.931940+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:28.932089+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:29.932221+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:30.932370+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:31.932471+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:32.932630+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:33.932711+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:34.932852+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:35.933034+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:36.933216+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:37.933353+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:38.933525+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:39.933677+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:40.933827+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:41.934105+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:42.934306+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:43.934451+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:44.934651+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:45.934778+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:46.934978+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:47.935121+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:48.935259+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:49.935389+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:50.935563+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:51.935695+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:52.936016+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:53.936153+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:54.936346+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:55.936506+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:56.936637+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:57.936765+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:58.936945+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:59.937351+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:00.937507+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:01.937670+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:02.937873+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:03.938022+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:04.938269+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:05.938415+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:06.938545+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:07.938705+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:08.938867+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:09.938983+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:10.939146+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:11.939312+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:12.939474+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:13.939597+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:14.939723+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:15.939956+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:16.940098+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:17.940255+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:18.940378+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:19.940502+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:20.941691+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:21.942659+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:22.943466+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:23.943990+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:24.944304+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:25.944496+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:26.947277+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:27.949110+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:28.951614+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:29.952322+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:30.952477+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:31.952839+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:32.952998+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:33.953874+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:34.954441+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:35.954644+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:36.954949+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:37.955141+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:38.955448+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:39.955624+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:40.955861+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:41.956060+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:42.956348+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:43.956548+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:44.956752+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:45.956940+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:46.957184+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:47.957363+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:48.957530+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:49.957672+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:50.957849+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:51.958062+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:52.958253+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:53.958415+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:54.958571+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:55.958677+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:56.958817+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:57.959021+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:58.959199+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:59.959349+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:00.959470+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:01.959603+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:02.959773+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:03.959976+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:04.960129+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:05.960271+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:06.960456+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:07.960657+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:08.960813+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:09.960987+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:10.961101+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:11.961260+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:12.961493+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:13.961618+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:14.961754+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:15.961961+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:16.962096+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:17.962222+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:18.962355+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:19.962582+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:20.962724+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:21.962952+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:22.963161+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:23.963267+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:24.963418+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:25.963595+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:26.964624+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:27.964791+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:28.965704+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:29.965864+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:30.966709+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:31.967353+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:32.967518+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:33.967782+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:34.967935+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:35.968326+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:36.968890+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:37.969181+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:38.969522+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:39.969719+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:40.969834+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:41.969961+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:42.970231+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861424400
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:43.970422+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:44.970582+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 120 handle_osd_map epochs [121,122], i have 120, src has [1,122]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 375.062835693s of 375.391784668s, submitted: 90
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:45.970791+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 9699328 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab4000/0x0/0x4ffc00000, data 0xaf0c9/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:46.971042+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70025216 unmapped: 16957440 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 123 ms_handle_reset con 0x556861424400 session 0x556861a3e000
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:47.971306+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 16941056 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fbab4000/0x0/0x4ffc00000, data 0x10af0c9/0x1169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949338 data_alloc: 218103808 data_used: 221184
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fbab0000/0x0/0x4ffc00000, data 0x10b0c85/0x116d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fa43c00
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:48.971426+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70172672 unmapped: 16809984 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 123 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 124 handle_osd_map epochs [124,124], i have 124, src has [1,124]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 124 ms_handle_reset con 0x55685fa43c00 session 0x556861a3e1e0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:49.971635+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 16613376 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:50.971847+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fbaaa000/0x0/0x4ffc00000, data 0x10b2851/0x1172000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:51.971976+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:52.972268+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959866 data_alloc: 218103808 data_used: 221184
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:53.972449+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:54.972706+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:55.972999+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:56.973180+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:57.973442+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959866 data_alloc: 218103808 data_used: 221184
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:58.973567+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:59.973699+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:00.973870+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:01.974013+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:02.974172+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959866 data_alloc: 218103808 data_used: 221184
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:03.974301+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:04.974463+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:05.974588+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:06.974722+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:07.974842+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960026 data_alloc: 218103808 data_used: 225280
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:08.974987+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:09.975133+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:10.975293+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:11.975432+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:12.975666+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960026 data_alloc: 218103808 data_used: 225280
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:13.975824+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:14.975984+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:15.976144+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:16.976317+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:17.976496+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960026 data_alloc: 218103808 data_used: 225280
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:18.976668+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:19.976831+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:20.977016+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685ecad000
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.260654449s of 35.765483856s, submitted: 60
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:21.977183+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70451200 unmapped: 16531456 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 126 ms_handle_reset con 0x55685ecad000 session 0x556861bab0e0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42d7/0x1176000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:22.977388+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fbaa4000/0x0/0x4ffc00000, data 0x10b5e54/0x1179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70451200 unmapped: 16531456 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fa43c00
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964309 data_alloc: 218103808 data_used: 233472
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:23.977533+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70483968 unmapped: 16498688 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 127 ms_handle_reset con 0x55685fa43c00 session 0x556861bab680
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:24.977718+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70565888 unmapped: 16416768 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:25.977986+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70565888 unmapped: 16416768 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee0c00
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:26.978146+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 71614464 unmapped: 15368192 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 128 ms_handle_reset con 0x55685fee0c00 session 0x55685ff2cf00
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:27.978319+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 15376384 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971712 data_alloc: 218103808 data_used: 249856
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fba9e000/0x0/0x4ffc00000, data 0x10b999b/0x117f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:28.978485+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 15376384 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1000
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:29.978598+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 15056896 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861424400
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:30.979173+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 22200320 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.523596764s of 10.044019699s, submitted: 88
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 129 ms_handle_reset con 0x556861424400 session 0x556861a3fc20
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685ecad400
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:31.979354+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1c00
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 21004288 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 130 ms_handle_reset con 0x55685fee1000 session 0x55685f1890e0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 130 ms_handle_reset con 0x55685ecad400 session 0x55685ff31e00
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685ecad400
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 131 ms_handle_reset con 0x55685fee1c00 session 0x556861b321e0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 131 ms_handle_reset con 0x55685ecad400 session 0x556861b42960
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:32.980188+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 131 heartbeat osd_stat(store_statfs(0x4f8a8c000/0x0/0x4ffc00000, data 0x40bfbe2/0x4190000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 20930560 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fa43c00
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee0c00
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1000
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334958 data_alloc: 218103808 data_used: 266240
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 131 ms_handle_reset con 0x55685fee1000 session 0x556861b33860
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861424400
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 131 ms_handle_reset con 0x556861424400 session 0x556861b42780
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:33.980606+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 19914752 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 132 ms_handle_reset con 0x55685fee0c00 session 0x55685f0854a0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685ecad400
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 132 ms_handle_reset con 0x55685fa43c00 session 0x556861b42f00
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1000
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f8a8a000/0x0/0x4ffc00000, data 0x40bfc15/0x4192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:34.980842+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1c00
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 19832832 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 133 ms_handle_reset con 0x55685fee1000 session 0x556861b5f4a0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 133 ms_handle_reset con 0x55685ecad400 session 0x55685f048960
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 133 ms_handle_reset con 0x55685fee1c00 session 0x556861b43860
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861424400
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 133 ms_handle_reset con 0x556861424400 session 0x556861b5f680
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685ecad400
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fa43c00
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 133 ms_handle_reset con 0x55685fa43c00 session 0x55685f049c20
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:35.981155+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 18759680 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 134 ms_handle_reset con 0x55685ecad400 session 0x556861b5fa40
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1000
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:36.981621+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fba7d000/0x0/0x4ffc00000, data 0x10c5d75/0x119e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 18751488 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 135 ms_handle_reset con 0x55685fee1000 session 0x55685ff2cf00
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:37.981769+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1c00
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 18718720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 136 ms_handle_reset con 0x55685fee1c00 session 0x55685f0854a0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1030796 data_alloc: 218103808 data_used: 266240
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:38.982125+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861424800
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 18628608 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x5568610d1000
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:39.982243+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 18595840 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 137 ms_handle_reset con 0x556861424800 session 0x556861b74f00
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 137 ms_handle_reset con 0x5568610d1000 session 0x55685e9225a0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:40.982565+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 17547264 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685ecad000
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fba77000/0x0/0x4ffc00000, data 0x10cafaa/0x11a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:41.982814+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 17547264 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.963214874s of 11.130927086s, submitted: 311
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 138 ms_handle_reset con 0x55685ecad000 session 0x556861b8c780
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:42.983241+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685ecad400
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 17514496 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba73000/0x0/0x4ffc00000, data 0x10cda61/0x11a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042431 data_alloc: 218103808 data_used: 278528
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:43.983355+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 140 ms_handle_reset con 0x55685ecad400 session 0x556861b8cf00
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 17448960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fa42800
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:44.983510+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 17448960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 141 ms_handle_reset con 0x55685fa42800 session 0x556861b8dc20
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fa43c00
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 141 ms_handle_reset con 0x55685fa43c00 session 0x55686331a1e0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685ecad000
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:45.983641+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685ecad400
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 17309696 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fba73000/0x0/0x4ffc00000, data 0x10d035e/0x11aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 142 ms_handle_reset con 0x55685ecad400 session 0x556861a32b40
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fa42800
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:46.983827+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 17170432 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x5568610d1000
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 143 ms_handle_reset con 0x5568610d1000 session 0x556861b321e0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 143 ms_handle_reset con 0x55685ecad000 session 0x55686331a780
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1c00
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 143 ms_handle_reset con 0x55685fee1c00 session 0x55686331af00
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 143 ms_handle_reset con 0x55685fa42800 session 0x556860d0c1e0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:47.984243+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 17203200 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053039 data_alloc: 218103808 data_used: 290816
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:48.984448+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 17170432 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fba6b000/0x0/0x4ffc00000, data 0x10d590a/0x11b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:49.984643+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 17170432 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:50.984839+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17162240 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:51.984985+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17162240 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:52.985140+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17162240 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053039 data_alloc: 218103808 data_used: 290816
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fba6b000/0x0/0x4ffc00000, data 0x10d590a/0x11b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 144 handle_osd_map epochs [145,145], i have 145, src has [1,145]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.896071434s of 11.615738869s, submitted: 215
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:53.985306+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 17145856 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:54.985534+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 17145856 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:55.985727+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 17145856 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fa42800
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:56.985880+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 17145856 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 145 handle_osd_map epochs [146,146], i have 146, src has [1,146]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:57.986082+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 17137664 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 147 ms_handle_reset con 0x55685fa42800 session 0x55686331b680
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063545 data_alloc: 218103808 data_used: 290816
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:58.986240+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685ecad000
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685ecad400
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 17080320 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fba62000/0x0/0x4ffc00000, data 0x10dab66/0x11ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:59.986401+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 17096704 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:00.986520+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 17096704 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1000
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:01.986659+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861a5e800
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 17096704 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 148 handle_osd_map epochs [148,148], i have 148, src has [1,148]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 148 ms_handle_reset con 0x556861a5e800 session 0x55686331bc20
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 148 ms_handle_reset con 0x55685fee1000 session 0x556861a3e960
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:02.987270+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861a5e400
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 17014784 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 149 ms_handle_reset con 0x556861a5e400 session 0x556861a32b40
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1070073 data_alloc: 218103808 data_used: 307200
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fba5c000/0x0/0x4ffc00000, data 0x10de2b4/0x11c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:03.987971+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fc06c00
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.863252640s of 10.083137512s, submitted: 82
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 149 ms_handle_reset con 0x55685fc06c00 session 0x556863375c20
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fa42800
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 16809984 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 149 handle_osd_map epochs [150,150], i have 150, src has [1,150]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 150 ms_handle_reset con 0x55685fa42800 session 0x55686331ad20
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:04.988436+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1000
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 16809984 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:05.988718+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 16801792 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 151 ms_handle_reset con 0x55685fee1000 session 0x5568633743c0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861a5e400
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:06.988884+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16793600 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fba56000/0x0/0x4ffc00000, data 0x10e1a95/0x11c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 152 ms_handle_reset con 0x556861a5e400 session 0x55686331a000
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:07.989069+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16793600 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fba51000/0x0/0x4ffc00000, data 0x10e366d/0x11cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082688 data_alloc: 218103808 data_used: 307200
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:08.989292+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16793600 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:09.989657+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16793600 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:10.990114+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16793600 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:11.990330+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16793600 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861a5e800
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 152 ms_handle_reset con 0x556861a5e800 session 0x5568633743c0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861ae1000
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861ae1400
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x556861ae1400 session 0x5568633a6000
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x556861ae1000 session 0x55686331b680
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:12.990478+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fa42800
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x55685fa42800 session 0x55686331ad20
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1c00
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x55685fee1c00 session 0x556861a3e960
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1000
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x55685fee1000 session 0x556860d0c1e0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861a5fc00
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x556861a5fc00 session 0x5568632121e0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fba4e000/0x0/0x4ffc00000, data 0x10e512b/0x11cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 16744448 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fba4e000/0x0/0x4ffc00000, data 0x10e512b/0x11cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086691 data_alloc: 218103808 data_used: 315392
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:13.990695+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 16744448 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fa42800
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.245035172s of 10.595973969s, submitted: 74
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x55685fa42800 session 0x5568632125a0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1000
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1c00
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:14.990828+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78938112 unmapped: 16441344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:15.990980+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78938112 unmapped: 16441344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:16.991116+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861ae1000
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x556861ae1000 session 0x55685f04a960
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861a5e800
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78938112 unmapped: 16441344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fba2b000/0x0/0x4ffc00000, data 0x110914a/0x11f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:17.991241+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 154 ms_handle_reset con 0x556861a5e800 session 0x55685f250780
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861a5e400
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861a5d000
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 154 ms_handle_reset con 0x556861a5d000 session 0x5568632130e0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 154 ms_handle_reset con 0x556861a5e400 session 0x556861b752c0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fa42800
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 154 ms_handle_reset con 0x55685fa42800 session 0x556861b42d20
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861a5d000
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 16400384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094730 data_alloc: 218103808 data_used: 331776
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:18.991437+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 155 ms_handle_reset con 0x556861a5d000 session 0x556861528000
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 16400384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 155 handle_osd_map epochs [155,156], i have 155, src has [1,156]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:19.991568+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861a5e800
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 156 ms_handle_reset con 0x556861a5e800 session 0x556861bab860
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 16400384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861ae1000
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 156 ms_handle_reset con 0x556861ae1000 session 0x556861b8c3c0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861a5d400
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:20.991681+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 156 ms_handle_reset con 0x556861a5d400 session 0x556861b8c1e0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861a5d400
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 156 ms_handle_reset con 0x556861a5d400 session 0x556861ab4f00
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 16400384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:21.991838+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 16400384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fa42800
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fba1e000/0x0/0x4ffc00000, data 0x110e473/0x11fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 156 ms_handle_reset con 0x55685fa42800 session 0x556861ab4d20
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:22.991977+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861a5d000
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 16515072 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100812 data_alloc: 218103808 data_used: 335872
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:23.992123+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78880768 unmapped: 16498688 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 157 ms_handle_reset con 0x556861a5d000 session 0x5568615290e0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:24.992295+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78880768 unmapped: 16498688 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 157 ms_handle_reset con 0x55685fee1000 session 0x556863212f00
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.930842400s of 11.172493935s, submitted: 53
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 157 ms_handle_reset con 0x55685fee1c00 session 0x5568633a61e0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fa42800
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:25.992431+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fba1e000/0x0/0x4ffc00000, data 0x111001e/0x11ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [0,0,1])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78888960 unmapped: 16490496 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:26.992658+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 158 ms_handle_reset con 0x55685fa42800 session 0x5568632132c0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78905344 unmapped: 16474112 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:27.992842+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78905344 unmapped: 16474112 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106820 data_alloc: 218103808 data_used: 327680
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:28.993075+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 16457728 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x10ef625/0x11df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:29.993248+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 16457728 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 159 ms_handle_reset con 0x55685ecad000 session 0x55686331ba40
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 159 ms_handle_reset con 0x55685ecad400 session 0x556861bab0e0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:30.993713+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1000
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 159 ms_handle_reset con 0x55685fee1000 session 0x5568633a6780
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 16457728 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:31.993977+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1c00
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 16457728 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 159 ms_handle_reset con 0x55685fee1c00 session 0x5568633a6b40
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685ecad000
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 159 handle_osd_map epochs [159,160], i have 159, src has [1,160]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:32.994277+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 160 ms_handle_reset con 0x55685ecad000 session 0x5568633a72c0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108734 data_alloc: 218103808 data_used: 331776
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:33.994529+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fba3b000/0x0/0x4ffc00000, data 0x10f122e/0x11e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:34.994696+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:35.994829+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:36.994950+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:37.995079+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108734 data_alloc: 218103808 data_used: 331776
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:38.995202+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fba3b000/0x0/0x4ffc00000, data 0x10f122e/0x11e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.445519447s of 13.752939224s, submitted: 101
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:39.995379+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:40.995517+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:41.995687+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:42.995849+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:43.995982+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:44.996119+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:45.996422+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:46.996733+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:47.996973+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:48.997170+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:49.997340+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:50.997530+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:51.997720+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:52.998006+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:53.998164+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:54.998293+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:55.998481+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:56.998746+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:57.999015+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:58.999156+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:59.999303+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:00.999439+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:01.999620+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:02.999830+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:03.999971+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:05.000135+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:06.000307+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:07.000446+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:08.000558+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:09.000720+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:10.000971+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:11.001172+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:12.001345+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:13.001516+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:14.001649+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:15.001808+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:16.002193+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:17.002378+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 7658 writes, 29K keys, 7658 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7658 writes, 1723 syncs, 4.44 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1996 writes, 5287 keys, 1996 commit groups, 1.0 writes per commit group, ingest: 2.75 MB, 0.00 MB/s
                                           Interval WAL: 1996 writes, 864 syncs, 2.31 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:18.002542+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:19.002675+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:20.002822+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:21.002992+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:22.003143+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:23.003305+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: mgrc ms_handle_reset ms_handle_reset con 0x55685f53c000
Nov 24 18:51:50 compute-0 ceph-osd[90655]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/536471675
Nov 24 18:51:50 compute-0 ceph-osd[90655]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/536471675,v1:192.168.122.100:6801/536471675]
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: get_auth_request con 0x556861ae1400 auth_method 0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: mgrc handle_mgr_configure stats_period=5
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:24.003478+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:25.003623+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:26.003812+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:27.003948+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:28.004086+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:29.004203+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:30.004370+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:31.004502+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:32.004629+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:33.004808+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:34.004965+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:35.005085+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:36.005220+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:37.005311+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:38.005444+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:39.005594+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:40.005718+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:41.005857+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:42.006007+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:43.006173+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:44.006322+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:45.006495+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:46.006627+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:47.006748+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:48.006891+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:49.007048+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:50.007189+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:51.007325+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:52.007501+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:53.007677+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:54.007823+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:55.007960+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:56.008127+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:57.008337+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:58.008497+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:59.008666+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:00.008786+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:01.008916+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:02.009030+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:03.009168+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:04.009301+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:05.009433+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:06.009661+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:07.009821+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:08.010012+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:09.010156+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:10.010268+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:11.010395+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:12.010521+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:13.010711+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:14.010865+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:15.011181+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:16.011296+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 16146432 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:17.011413+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: do_command 'config diff' '{prefix=config diff}'
Nov 24 18:51:50 compute-0 ceph-osd[90655]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: do_command 'config show' '{prefix=config show}'
Nov 24 18:51:50 compute-0 ceph-osd[90655]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 24 18:51:50 compute-0 ceph-osd[90655]: do_command 'counter dump' '{prefix=counter dump}'
Nov 24 18:51:50 compute-0 ceph-osd[90655]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79536128 unmapped: 15843328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: do_command 'counter schema' '{prefix=counter schema}'
Nov 24 18:51:50 compute-0 ceph-osd[90655]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:18.011542+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 15564800 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:51:50 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:19.011685+0000)
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:50 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:50 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:51:50 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 15515648 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:50 compute-0 ceph-osd[90655]: do_command 'log dump' '{prefix=log dump}'
Nov 24 18:51:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 24 18:51:50 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3579092568' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 18:51:50 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14809 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:50 compute-0 ceph-mon[74927]: from='client.14793 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:50 compute-0 ceph-mon[74927]: from='client.14797 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:50 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1646305784' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 18:51:50 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3579092568' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 18:51:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 24 18:51:50 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2871650015' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 18:51:50 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14813 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 24 18:51:50 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3361796308' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 18:51:51 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14817 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:51:51 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 24 18:51:51 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2544646524' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 24 18:51:51 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14821 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:51:51 compute-0 ceph-mon[74927]: from='client.14801 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:51 compute-0 ceph-mon[74927]: pgmap v1119: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:51 compute-0 ceph-mon[74927]: from='client.14805 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:51 compute-0 ceph-mon[74927]: from='client.14809 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:51 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2871650015' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 18:51:51 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3361796308' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 18:51:51 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2544646524' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.553767) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010311553800, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2311, "num_deletes": 271, "total_data_size": 3545995, "memory_usage": 3612936, "flush_reason": "Manual Compaction"}
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010311570944, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3449696, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20959, "largest_seqno": 23269, "table_properties": {"data_size": 3438945, "index_size": 6989, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22643, "raw_average_key_size": 21, "raw_value_size": 3417313, "raw_average_value_size": 3205, "num_data_blocks": 310, "num_entries": 1066, "num_filter_entries": 1066, "num_deletions": 271, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764010125, "oldest_key_time": 1764010125, "file_creation_time": 1764010311, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 17210 microseconds, and 8086 cpu microseconds.
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.570978) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3449696 bytes OK
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.570993) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.572960) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.572974) EVENT_LOG_v1 {"time_micros": 1764010311572970, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.572989) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3536003, prev total WAL file size 3536003, number of live WAL files 2.
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.573769) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3368KB)], [50(7327KB)]
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010311573795, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10953450, "oldest_snapshot_seqno": -1}
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4851 keys, 9181861 bytes, temperature: kUnknown
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010311631757, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9181861, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9146435, "index_size": 22196, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12165, "raw_key_size": 119155, "raw_average_key_size": 24, "raw_value_size": 9055691, "raw_average_value_size": 1866, "num_data_blocks": 930, "num_entries": 4851, "num_filter_entries": 4851, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008324, "oldest_key_time": 0, "file_creation_time": 1764010311, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.631977) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9181861 bytes
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.633447) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 188.8 rd, 158.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.2 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(5.8) write-amplify(2.7) OK, records in: 5389, records dropped: 538 output_compression: NoCompression
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.633462) EVENT_LOG_v1 {"time_micros": 1764010311633455, "job": 26, "event": "compaction_finished", "compaction_time_micros": 58021, "compaction_time_cpu_micros": 18676, "output_level": 6, "num_output_files": 1, "total_output_size": 9181861, "num_input_records": 5389, "num_output_records": 4851, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010311634018, "job": 26, "event": "table_file_deletion", "file_number": 52}
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010311635249, "job": 26, "event": "table_file_deletion", "file_number": 50}
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.573691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.635285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.635290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.635292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.635293) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:51:51 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.635294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:51:51 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1120: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:52 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14827 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:51:52 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:51:52.104+0000 7f6377bb5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 24 18:51:52 compute-0 ceph-mgr[75218]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 24 18:51:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 24 18:51:52 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4249350633' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 24 18:51:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 24 18:51:52 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3376743541' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 24 18:51:52 compute-0 ceph-mon[74927]: from='client.14813 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:52 compute-0 ceph-mon[74927]: from='client.14817 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:51:52 compute-0 ceph-mon[74927]: from='client.14821 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:51:52 compute-0 ceph-mon[74927]: pgmap v1120: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:52 compute-0 ceph-mon[74927]: from='client.14827 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:51:52 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/4249350633' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 24 18:51:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 24 18:51:52 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/876718826' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 24 18:51:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:51:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 24 18:51:52 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2467813662' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 24 18:51:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 24 18:51:52 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1751375874' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 24 18:51:53 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 24 18:51:53 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/349103013' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 24 18:51:53 compute-0 crontab[285155]: (root) LIST (root)
Nov 24 18:51:53 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 24 18:51:53 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2330414028' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 24 18:51:53 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3376743541' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 24 18:51:53 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/876718826' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 24 18:51:53 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2467813662' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 24 18:51:53 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1751375874' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 24 18:51:53 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/349103013' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 24 18:51:53 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2330414028' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 24 18:51:53 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 24 18:51:53 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3021928101' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 24 18:51:53 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1121: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:53 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 24 18:51:53 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1007814410' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 24 18:51:53 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 24 18:51:53 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/647054694' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[59,73)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994158745s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084793091s@ mbc={}] exit Reset 0.000144 1 0.000322
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994158745s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084793091s@ mbc={}] enter Started
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994158745s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084793091s@ mbc={}] enter Start
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994112968s) [2] async=[2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 166.084762573s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994158745s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084793091s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994158745s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084793091s@ mbc={}] exit Start 0.000010 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994158745s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084793091s@ mbc={}] enter Started/Stray
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994071960s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084762573s@ mbc={}] exit Reset 0.000066 1 0.000104
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994071960s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084762573s@ mbc={}] enter Started
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994071960s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084762573s@ mbc={}] enter Start
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994071960s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084762573s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994071960s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084762573s@ mbc={}] exit Start 0.000009 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994071960s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084762573s@ mbc={}] enter Started/Stray
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.993534088s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084701538s@ mbc={}] exit Reset 0.000797 1 0.000867
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.993534088s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084701538s@ mbc={}] enter Started
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.993534088s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084701538s@ mbc={}] enter Start
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.993534088s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084701538s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.993534088s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084701538s@ mbc={}] exit Start 0.000022 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.993534088s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084701538s@ mbc={}] enter Started/Stray
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 27.474400 46 0.000155
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 27.485983 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 27.486060 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 27.486097 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525858879s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 163.617202759s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525828362s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617202759s@ mbc={}] exit Reset 0.000062 1 0.000097
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525828362s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617202759s@ mbc={}] enter Started
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525828362s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617202759s@ mbc={}] enter Start
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525828362s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617202759s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525828362s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617202759s@ mbc={}] exit Start 0.000015 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525828362s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617202759s@ mbc={}] enter Started/Stray
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 27.472724 46 0.000152
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 27.484947 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 27.484996 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 27.485018 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525444031s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 163.617691040s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525407791s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617691040s@ mbc={}] exit Reset 0.000057 1 0.000629
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525407791s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617691040s@ mbc={}] enter Started
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525407791s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617691040s@ mbc={}] enter Start
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525407791s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617691040s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525407791s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617691040s@ mbc={}] exit Start 0.000009 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525407791s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617691040s@ mbc={}] enter Started/Stray
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70541312 unmapped: 786432 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:42.443969+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.015181 3 0.000191
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.015266 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.014440 3 0.000049
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.014494 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000063 1 0.000095
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000006 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000027 1 0.000038
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000068 1 0.000110
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000007 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000022 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000034 1 0.000041
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000023 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.024054 7 0.000130
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000090 1 0.000035
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029625 7 0.000181
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.028996 7 0.000130
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029605 7 0.000371
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000128 1 0.000076
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000185 1 0.000083
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000261 1 0.000074
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.e( v 55'385 (0'0,55'385] lb MIN local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 DELETING pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.066580 2 0.000205
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.e( v 55'385 (0'0,55'385] lb MIN local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.066724 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.e( v 55'385 (0'0,55'385] lb MIN local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.090823 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.1e( v 55'385 (0'0,55'385] lb MIN local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 DELETING pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.098167 2 0.000170
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.1e( v 55'385 (0'0,55'385] lb MIN local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.098342 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.1e( v 55'385 (0'0,55'385] lb MIN local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.128032 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.16( v 55'385 (0'0,55'385] lb MIN local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 DELETING pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.127666 2 0.000141
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.16( v 55'385 (0'0,55'385] lb MIN local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.127897 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.16( v 55'385 (0'0,55'385] lb MIN local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.156965 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.6( v 55'385 (0'0,55'385] lb MIN local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 DELETING pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.172048 2 0.000192
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.6( v 55'385 (0'0,55'385] lb MIN local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.172349 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.6( v 55'385 (0'0,55'385] lb MIN local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.202012 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70598656 unmapped: 729088 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:43.444131+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 76 handle_osd_map epochs [76,77], i have 76, src has [1,77]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 76 handle_osd_map epochs [76,77], i have 77, src has [1,77]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003459 4 0.000063
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.003585 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003547 4 0.000064
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.003689 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 77 handle_osd_map epochs [77,77], i have 77, src has [1,77]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 77 handle_osd_map epochs [77,77], i have 77, src has [1,77]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.006830 5 0.000656
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.006733 5 0.000516
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000114 1 0.000056
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000400 1 0.000080
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.061615 2 0.000042
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.062176 1 0.000040
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000546 1 0.000040
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.034886 2 0.000041
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70320128 unmapped: 1007616 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:44.444347+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.953204 1 0.000094
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.022519 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.026130 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.026164 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.984064102s) [2] async=[2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 169.117111206s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983811378s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.117111206s@ mbc={}] exit Reset 0.000325 1 0.000405
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983811378s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.117111206s@ mbc={}] enter Started
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983811378s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.117111206s@ mbc={}] enter Start
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983811378s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.117111206s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.918314 1 0.000076
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.022942 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.026674 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.026700 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983452797s) [2] async=[2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 169.117080688s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983811378s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.117111206s@ mbc={}] exit Start 0.000208 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983373642s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.117080688s@ mbc={}] exit Reset 0.000125 1 0.000179
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983373642s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.117080688s@ mbc={}] enter Started
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983373642s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.117080688s@ mbc={}] enter Start
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983373642s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.117080688s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983373642s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.117080688s@ mbc={}] exit Start 0.000011 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983373642s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.117080688s@ mbc={}] enter Started/Stray
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983811378s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.117111206s@ mbc={}] enter Started/Stray
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 78 handle_osd_map epochs [78,78], i have 78, src has [1,78]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 78 handle_osd_map epochs [78,78], i have 78, src has [1,78]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70516736 unmapped: 811008 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:45.444964+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 78 handle_osd_map epochs [78,79], i have 78, src has [1,79]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.025568 7 0.000453
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000164 1 0.000080
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029545 7 0.000673
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000143 1 0.000057
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] lb MIN local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 DELETING pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.041707 2 0.000250
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] lb MIN local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.041948 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] lb MIN local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.067589 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] lb MIN local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 DELETING pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.089708 2 0.000171
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] lb MIN local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.089925 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] lb MIN local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.119900 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 79 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xd427d/0x151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 645863 data_alloc: 218103808 data_used: 98304
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 704512 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:46.445121+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:16.382457+0000 osd.1 (osd.1) 70 : cluster [DBG] 6.1 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:16.396509+0000 osd.1 (osd.1) 71 : cluster [DBG] 6.1 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 71) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:16.382457+0000 osd.1 (osd.1) 70 : cluster [DBG] 6.1 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:16.396509+0000 osd.1 (osd.1) 71 : cluster [DBG] 6.1 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 704512 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:47.445333+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 704512 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:48.445590+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 79 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xd5bd4/0x152000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.853742599s of 11.075757980s, submitted: 70
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70664192 unmapped: 663552 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:49.445719+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 655360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:50.445875+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 80 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xd7751/0x155000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 653626 data_alloc: 218103808 data_used: 98304
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 573440 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:51.446030+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:21.396327+0000 osd.1 (osd.1) 72 : cluster [DBG] 4.5 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:21.410345+0000 osd.1 (osd.1) 73 : cluster [DBG] 4.5 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 73) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:21.396327+0000 osd.1 (osd.1) 72 : cluster [DBG] 4.5 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:21.410345+0000 osd.1 (osd.1) 73 : cluster [DBG] 4.5 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 540672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:52.446210+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:22.401200+0000 osd.1 (osd.1) 74 : cluster [DBG] 4.4 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:22.415318+0000 osd.1 (osd.1) 75 : cluster [DBG] 4.4 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 75) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:22.401200+0000 osd.1 (osd.1) 74 : cluster [DBG] 4.4 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:22.415318+0000 osd.1 (osd.1) 75 : cluster [DBG] 4.4 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 540672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:53.446416+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 532480 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.4 deep-scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.4 deep-scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:54.446632+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:24.378615+0000 osd.1 (osd.1) 76 : cluster [DBG] 6.4 deep-scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:24.392771+0000 osd.1 (osd.1) 77 : cluster [DBG] 6.4 deep-scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 77) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:24.378615+0000 osd.1 (osd.1) 76 : cluster [DBG] 6.4 deep-scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:24.392771+0000 osd.1 (osd.1) 77 : cluster [DBG] 6.4 deep-scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 81 handle_osd_map epochs [82,83], i have 81, src has [1,83]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 82 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 40.880109 69 0.000248
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 82 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 40.891839 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 82 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 40.893175 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 82 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 40.893577 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 82 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 82 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.120257378s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 179.617523193s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 83 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.120175362s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.617523193s@ mbc={}] exit Reset 0.000126 2 0.000187
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 83 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.120175362s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.617523193s@ mbc={}] enter Started
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 83 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.120175362s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.617523193s@ mbc={}] enter Start
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 83 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.120175362s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.617523193s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 83 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.120175362s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.617523193s@ mbc={}] exit Start 0.000016 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 83 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.120175362s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.617523193s@ mbc={}] enter Started/Stray
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 82 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 40.878860 69 0.000889
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 82 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 40.891128 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 82 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 40.891217 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 82 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 40.891288 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 82 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 82 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.119441986s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 179.617889404s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 83 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.119354248s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.617889404s@ mbc={}] exit Reset 0.000138 2 0.000205
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 83 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.119354248s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.617889404s@ mbc={}] enter Started
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 83 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.119354248s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.617889404s@ mbc={}] enter Start
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 83 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.119354248s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.617889404s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 83 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.119354248s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.617889404s@ mbc={}] exit Start 0.000022 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 83 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.119354248s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.617889404s@ mbc={}] enter Started/Stray
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 83 handle_osd_map epochs [82,83], i have 83, src has [1,83]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 450560 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:55.446837+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.000339 3 0.000121
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.000457 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.001552 3 0.000118
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.001722 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 24 18:51:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000887 1 0.000988
Nov 24 18:51:54 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/533489741' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000806 1 0.001052
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000030 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000090 1 0.000173
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000051 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000012 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000434 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000074 1 0.000768
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000079 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000037 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 666991 data_alloc: 218103808 data_used: 110592
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 442368 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 84 heartbeat osd_stat(store_statfs(0x4fcabf000/0x0/0x4ffc00000, data 0xdc9c8/0x15e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:56.447012+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 84 handle_osd_map epochs [84,85], i have 84, src has [1,85]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 84 handle_osd_map epochs [85,85], i have 85, src has [1,85]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.070412 4 0.000148
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.070660 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.069340 4 0.000779
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.070255 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.048710 5 0.000387
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000341 1 0.000085
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.048960 5 0.000453
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000590 1 0.000033
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.117051 2 0.000069
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.117694 1 0.000049
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000374 1 0.000050
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 409600 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.207841 2 0.000081
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.b deep-scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.b deep-scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:57.447124+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:27.405810+0000 osd.1 (osd.1) 78 : cluster [DBG] 6.b deep-scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:27.423489+0000 osd.1 (osd.1) 79 : cluster [DBG] 6.b deep-scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.441883 1 0.000179
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 0.608824 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 1.679505 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 1.679607 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439636230s) [2] async=[2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 182.619369507s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439560890s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.619369507s@ mbc={}] exit Reset 0.000146 1 0.000192
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439560890s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.619369507s@ mbc={}] enter Started
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439560890s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.619369507s@ mbc={}] enter Start
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439560890s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.619369507s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439560890s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.619369507s@ mbc={}] exit Start 0.000009 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439560890s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.619369507s@ mbc={}] enter Started/Stray
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.234371 1 0.000081
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 0.609595 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 1.680066 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 1.680558 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439293861s) [2] async=[2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 182.619827271s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439209938s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.619827271s@ mbc={}] exit Reset 0.000118 1 0.000205
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439209938s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.619827271s@ mbc={}] enter Started
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439209938s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.619827271s@ mbc={}] enter Start
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439209938s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.619827271s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439209938s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.619827271s@ mbc={}] exit Start 0.000009 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439209938s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.619827271s@ mbc={}] enter Started/Stray
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 86 handle_osd_map epochs [86,86], i have 86, src has [1,86]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 79) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:27.405810+0000 osd.1 (osd.1) 78 : cluster [DBG] 6.b deep-scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:27.423489+0000 osd.1 (osd.1) 79 : cluster [DBG] 6.b deep-scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 360448 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:58.447343+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:28.365719+0000 osd.1 (osd.1) 80 : cluster [DBG] 4.7 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:28.379817+0000 osd.1 (osd.1) 81 : cluster [DBG] 4.7 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 81) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:28.365719+0000 osd.1 (osd.1) 80 : cluster [DBG] 4.7 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:28.379817+0000 osd.1 (osd.1) 81 : cluster [DBG] 4.7 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 360448 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.282258034s of 10.740533829s, submitted: 33
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.809903 6 0.000196
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.809542 6 0.000621
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000443 1 0.000049
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: not registered w/ OSD
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000681 2 0.000139
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: not registered w/ OSD
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:59.447507+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:29.385472+0000 osd.1 (osd.1) 82 : cluster [DBG] 4.8 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:29.399643+0000 osd.1 (osd.1) 83 : cluster [DBG] 4.8 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] lb MIN local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 DELETING pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.060677 3 0.000330
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] lb MIN local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.061191 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] lb MIN local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.871144 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: not registered w/ OSD
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] lb MIN local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 DELETING pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.111936 2 0.000169
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] lb MIN local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.112713 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] lb MIN local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.922324 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: not registered w/ OSD
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 83) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:29.385472+0000 osd.1 (osd.1) 82 : cluster [DBG] 4.8 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:29.399643+0000 osd.1 (osd.1) 83 : cluster [DBG] 4.8 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 1400832 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 87 heartbeat osd_stat(store_statfs(0x4fcab3000/0x0/0x4ffc00000, data 0xe3438/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:00.447666+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 658095 data_alloc: 218103808 data_used: 118784
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 1400832 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:01.447760+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 1392640 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:02.447881+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 1392640 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:03.448025+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:33.264703+0000 osd.1 (osd.1) 84 : cluster [DBG] 6.1c scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:33.281977+0000 osd.1 (osd.1) 85 : cluster [DBG] 6.1c scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 85) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:33.264703+0000 osd.1 (osd.1) 84 : cluster [DBG] 6.1c scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:33.281977+0000 osd.1 (osd.1) 85 : cluster [DBG] 6.1c scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 1392640 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:04.448176+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:34.289202+0000 osd.1 (osd.1) 86 : cluster [DBG] 6.6 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:34.306912+0000 osd.1 (osd.1) 87 : cluster [DBG] 6.6 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 87 heartbeat osd_stat(store_statfs(0x4fcab5000/0x0/0x4ffc00000, data 0xe3438/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 87) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:34.289202+0000 osd.1 (osd.1) 86 : cluster [DBG] 6.6 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:34.306912+0000 osd.1 (osd.1) 87 : cluster [DBG] 6.6 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 1368064 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:05.448323+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 87 heartbeat osd_stat(store_statfs(0x4fcab5000/0x0/0x4ffc00000, data 0xe3438/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 663010 data_alloc: 218103808 data_used: 126976
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 1368064 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:06.448436+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 1343488 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:07.448557+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 1343488 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:08.448704+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:38.278035+0000 osd.1 (osd.1) 88 : cluster [DBG] 6.1e scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:38.292018+0000 osd.1 (osd.1) 89 : cluster [DBG] 6.1e scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 89) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:38.278035+0000 osd.1 (osd.1) 88 : cluster [DBG] 6.1e scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:38.292018+0000 osd.1 (osd.1) 89 : cluster [DBG] 6.1e scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 1335296 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:09.448872+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 1335296 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 88 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xe4fb5/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 88 handle_osd_map epochs [89,90], i have 89, src has [1,90]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.799038887s of 10.860842705s, submitted: 43
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:10.449002+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 90 handle_osd_map epochs [90,91], i have 90, src has [1,91]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 674738 data_alloc: 218103808 data_used: 135168
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 1310720 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:11.449145+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:41.371459+0000 osd.1 (osd.1) 90 : cluster [DBG] 6.1d scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:41.389130+0000 osd.1 (osd.1) 91 : cluster [DBG] 6.1d scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 91) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:41.371459+0000 osd.1 (osd.1) 90 : cluster [DBG] 6.1d scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:41.389130+0000 osd.1 (osd.1) 91 : cluster [DBG] 6.1d scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 1310720 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:12.449359+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 91 heartbeat osd_stat(store_statfs(0x4fcaa6000/0x0/0x4ffc00000, data 0xea22c/0x175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71098368 unmapped: 1277952 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:13.449528+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:43.304190+0000 osd.1 (osd.1) 92 : cluster [DBG] 4.9 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:43.318312+0000 osd.1 (osd.1) 93 : cluster [DBG] 4.9 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 93) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:43.304190+0000 osd.1 (osd.1) 92 : cluster [DBG] 4.9 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:43.318312+0000 osd.1 (osd.1) 93 : cluster [DBG] 4.9 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71106560 unmapped: 1269760 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:14.449838+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:44.278182+0000 osd.1 (osd.1) 94 : cluster [DBG] 5.18 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:44.292306+0000 osd.1 (osd.1) 95 : cluster [DBG] 5.18 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 91 handle_osd_map epochs [92,94], i have 91, src has [1,94]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 95) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:44.278182+0000 osd.1 (osd.1) 94 : cluster [DBG] 5.18 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:44.292306+0000 osd.1 (osd.1) 95 : cluster [DBG] 5.18 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 1130496 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:15.449996+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 687129 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1122304 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:16.450109+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 94 heartbeat osd_stat(store_statfs(0x4fca9f000/0x0/0x4ffc00000, data 0xef25a/0x17e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 94 handle_osd_map epochs [95,95], i have 95, src has [1,95]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 1114112 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:17.450232+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:47.277592+0000 osd.1 (osd.1) 96 : cluster [DBG] 5.1d scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:47.291673+0000 osd.1 (osd.1) 97 : cluster [DBG] 5.1d scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 97) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:47.277592+0000 osd.1 (osd.1) 96 : cluster [DBG] 5.1d scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:47.291673+0000 osd.1 (osd.1) 97 : cluster [DBG] 5.1d scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 1114112 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:18.450386+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1105920 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:19.450514+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1105920 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.946335793s of 10.017564774s, submitted: 35
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:20.450679+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:50.261685+0000 osd.1 (osd.1) 98 : cluster [DBG] 5.1a scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:50.275763+0000 osd.1 (osd.1) 99 : cluster [DBG] 5.1a scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693753 data_alloc: 218103808 data_used: 151552
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 99) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:50.261685+0000 osd.1 (osd.1) 98 : cluster [DBG] 5.1a scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:50.275763+0000 osd.1 (osd.1) 99 : cluster [DBG] 5.1a scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1105920 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:21.450846+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1097728 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:22.450994+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 96 heartbeat osd_stat(store_statfs(0x4fca9a000/0x0/0x4ffc00000, data 0xf26d5/0x184000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 96 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 98 handle_osd_map epochs [98,99], i have 98, src has [1,99]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 1089536 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15(unlocked)] enter Initial
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=0 pi=[67,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000042 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=0 pi=[67,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000012 1 0.000024
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000119 1 0.000046
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000032 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000167 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:23.451132+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 1081344 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 99 handle_osd_map epochs [99,100], i have 99, src has [1,100]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 99 handle_osd_map epochs [99,100], i have 100, src has [1,100]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.007614 2 0.000067
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.007817 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.007839 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000075 1 0.000117
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000005 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 100 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.c deep-scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.c deep-scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:24.451287+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:54.268371+0000 osd.1 (osd.1) 100 : cluster [DBG] 5.c deep-scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:54.282511+0000 osd.1 (osd.1) 101 : cluster [DBG] 5.c deep-scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 100 heartbeat osd_stat(store_statfs(0x4fca8d000/0x0/0x4ffc00000, data 0xf93eb/0x190000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 1024000 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 101) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:54.268371+0000 osd.1 (osd.1) 100 : cluster [DBG] 5.c deep-scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:54.282511+0000 osd.1 (osd.1) 101 : cluster [DBG] 5.c deep-scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 101 pg[9.15( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.023515 5 0.000107
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 101 pg[9.15( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 101 pg[9.15( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: not registered w/ OSD
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 101 pg[9.15( v 55'385 lc 55'152 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.016319 4 0.000127
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 101 pg[9.15( v 55'385 lc 55'152 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 101 pg[9.15( v 55'385 lc 55'152 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000079 1 0.000085
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 101 pg[9.15( v 55'385 lc 55'152 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 101 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.054536 1 0.000102
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 101 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:25.451467+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:55.270941+0000 osd.1 (osd.1) 102 : cluster [DBG] 2.9 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:55.284205+0000 osd.1 (osd.1) 103 : cluster [DBG] 2.9 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720508 data_alloc: 218103808 data_used: 159744
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 1155072 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 103) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:55.270941+0000 osd.1 (osd.1) 102 : cluster [DBG] 2.9 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:55.284205+0000 osd.1 (osd.1) 103 : cluster [DBG] 2.9 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.933305 1 0.000033
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.004411 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started 2.027976 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown mbc={}] exit Reset 0.000190 1 0.000282
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Start
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown mbc={}] exit Start 0.000065 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000059 1 0.000255
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: merge_log_dups log.dups.size()=0olog.dups.size()=9
Nov 24 18:51:54 compute-0 ceph-osd[89581]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001237 3 0.000081
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000023 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:26.451698+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 102 heartbeat osd_stat(store_statfs(0x4fca84000/0x0/0x4ffc00000, data 0xfc92b/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 102 heartbeat osd_stat(store_statfs(0x4fca84000/0x0/0x4ffc00000, data 0xfc92b/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 1105920 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 102 handle_osd_map epochs [103,103], i have 103, src has [1,103]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 103 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011152 2 0.000186
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 103 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.012633 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 103 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 103 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=102/103 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 103 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=102/103 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 103 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=102/103 n=5 ec=59/49 lis/c=102/67 les/c/f=103/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.014761 4 0.000202
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 103 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=102/103 n=5 ec=59/49 lis/c=102/67 les/c/f=103/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 103 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=102/103 n=5 ec=59/49 lis/c=102/67 les/c/f=103/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000019 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 103 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=102/103 n=5 ec=59/49 lis/c=102/67 les/c/f=103/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:27.451890+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:57.301879+0000 osd.1 (osd.1) 104 : cluster [DBG] 5.f scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:57.316085+0000 osd.1 (osd.1) 105 : cluster [DBG] 5.f scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 1097728 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 105) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:57.301879+0000 osd.1 (osd.1) 104 : cluster [DBG] 5.f scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:57.316085+0000 osd.1 (osd.1) 105 : cluster [DBG] 5.f scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:28.452169+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:58.298046+0000 osd.1 (osd.1) 106 : cluster [DBG] 2.6 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:23:58.312147+0000 osd.1 (osd.1) 107 : cluster [DBG] 2.6 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 103 heartbeat osd_stat(store_statfs(0x4fca83000/0x0/0x4ffc00000, data 0xfe366/0x19a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 1097728 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 107) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:58.298046+0000 osd.1 (osd.1) 106 : cluster [DBG] 2.6 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:23:58.312147+0000 osd.1 (osd.1) 107 : cluster [DBG] 2.6 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:29.452384+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 103 heartbeat osd_stat(store_statfs(0x4fca83000/0x0/0x4ffc00000, data 0xfe366/0x19a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72351744 unmapped: 1073152 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:30.452503+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 727902 data_alloc: 218103808 data_used: 159744
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72351744 unmapped: 1073152 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:31.452641+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 103 heartbeat osd_stat(store_statfs(0x4fca83000/0x0/0x4ffc00000, data 0xfe366/0x19a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.715170860s of 11.885769844s, submitted: 61
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 1064960 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:32.452831+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 1064960 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.7 deep-scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.7 deep-scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:33.452954+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:24:03.377001+0000 osd.1 (osd.1) 108 : cluster [DBG] 2.7 deep-scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:24:03.391137+0000 osd.1 (osd.1) 109 : cluster [DBG] 2.7 deep-scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72343552 unmapped: 1081344 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 109) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:24:03.377001+0000 osd.1 (osd.1) 108 : cluster [DBG] 2.7 deep-scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:24:03.391137+0000 osd.1 (osd.1) 109 : cluster [DBG] 2.7 deep-scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:34.453168+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:24:04.350516+0000 osd.1 (osd.1) 110 : cluster [DBG] 5.1 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:24:04.364615+0000 osd.1 (osd.1) 111 : cluster [DBG] 5.1 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 105 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0x101a60/0x1a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72343552 unmapped: 1081344 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 105 handle_osd_map epochs [105,106], i have 105, src has [1,106]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 111) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:24:04.350516+0000 osd.1 (osd.1) 110 : cluster [DBG] 5.1 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:24:04.364615+0000 osd.1 (osd.1) 111 : cluster [DBG] 5.1 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:35.453395+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:24:05.380697+0000 osd.1 (osd.1) 112 : cluster [DBG] 2.4 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:24:05.394670+0000 osd.1 (osd.1) 113 : cluster [DBG] 2.4 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fca79000/0x0/0x4ffc00000, data 0x1035dd/0x1a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 741097 data_alloc: 218103808 data_used: 163840
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 1064960 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 113) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:24:05.380697+0000 osd.1 (osd.1) 112 : cluster [DBG] 2.4 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:24:05.394670+0000 osd.1 (osd.1) 113 : cluster [DBG] 2.4 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 106 handle_osd_map epochs [106,107], i have 106, src has [1,107]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:36.453612+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72368128 unmapped: 1056768 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:37.453748+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:24:07.377719+0000 osd.1 (osd.1) 114 : cluster [DBG] 2.3 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:24:07.391972+0000 osd.1 (osd.1) 115 : cluster [DBG] 2.3 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 107 heartbeat osd_stat(store_statfs(0x4fca75000/0x0/0x4ffc00000, data 0x105042/0x1a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 107 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 1048576 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 115) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:24:07.377719+0000 osd.1 (osd.1) 114 : cluster [DBG] 2.3 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:24:07.391972+0000 osd.1 (osd.1) 115 : cluster [DBG] 2.3 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:38.454013+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 1040384 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:39.454154+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 999424 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:40.454267+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 751838 data_alloc: 218103808 data_used: 163840
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 999424 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fca6f000/0x0/0x4ffc00000, data 0x108628/0x1ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 109 handle_osd_map epochs [110,110], i have 110, src has [1,110]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:41.454384+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 991232 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:42.454515+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 991232 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:43.454630+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 110 handle_osd_map epochs [111,112], i have 110, src has [1,112]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.227064133s of 11.310695648s, submitted: 29
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 112 heartbeat osd_stat(store_statfs(0x4fca6b000/0x0/0x4ffc00000, data 0x10a1ad/0x1af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 983040 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:44.454784+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 974848 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 112 heartbeat osd_stat(store_statfs(0x4fca68000/0x0/0x4ffc00000, data 0x10d8a7/0x1b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 112 handle_osd_map epochs [113,113], i have 112, src has [1,113]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 112 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:45.454953+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 113 handle_osd_map epochs [113,114], i have 113, src has [1,114]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766036 data_alloc: 218103808 data_used: 163840
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 2007040 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:46.455114+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca61000/0x0/0x4ffc00000, data 0x110eaa/0x1bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 114 handle_osd_map epochs [115,115], i have 114, src has [1,115]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 114 handle_osd_map epochs [115,115], i have 115, src has [1,115]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 1998848 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:47.455241+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 115 handle_osd_map epochs [116,116], i have 115, src has [1,116]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f(unlocked)] enter Initial
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=0 pi=[76,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000063 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=0 pi=[76,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000039
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000015 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000122 1 0.000065
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000030 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000168 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 1990656 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:48.455367+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 116 handle_osd_map epochs [116,117], i have 116, src has [1,117]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 116 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.007651 2 0.000059
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.007851 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.007895 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000162 1 0.000218
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000043 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fca59000/0x0/0x4ffc00000, data 0x1144d2/0x1c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 1982464 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:49.455499+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 1974272 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 117 handle_osd_map epochs [118,118], i have 117, src has [1,118]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 118 pg[9.1f( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.927454 5 0.000149
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 118 pg[9.1f( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 118 pg[9.1f( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:50.455603+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 118 pg[9.1f( v 55'385 lc 55'105 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.006233 4 0.000134
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 118 pg[9.1f( v 55'385 lc 55'105 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 118 pg[9.1f( v 55'385 lc 55'105 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000083 1 0.000045
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 118 pg[9.1f( v 55'385 lc 55'105 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.043930 1 0.000039
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.061012 1 0.000081
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.111436 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started 2.039016 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown mbc={}] exit Reset 0.000261 1 0.000417
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Start
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown mbc={}] exit Start 0.000072 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000067 1 0.000370
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: merge_log_dups log.dups.size()=0olog.dups.size()=11
Nov 24 18:51:54 compute-0 ceph-osd[89581]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000823 3 0.000155
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000016 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 791141 data_alloc: 218103808 data_used: 172032
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 901120 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:51.455735+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 119 handle_osd_map epochs [119,120], i have 119, src has [1,120]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.008310 2 0.000132
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009431 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=119/120 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=119/120 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=119/120 n=5 ec=59/49 lis/c=119/76 les/c/f=120/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009504 3 0.000380
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=119/120 n=5 ec=59/49 lis/c=119/76 les/c/f=120/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=119/120 n=5 ec=59/49 lis/c=119/76 les/c/f=120/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=119/120 n=5 ec=59/49 lis/c=119/76 les/c/f=120/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 handle_osd_map epochs [120,120], i have 120, src has [1,120]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11945e/0x1cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 892928 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:52.455936+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 892928 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:53.456129+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:24:22.513458+0000 osd.1 (osd.1) 116 : cluster [DBG] 2.5 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:24:22.527621+0000 osd.1 (osd.1) 117 : cluster [DBG] 2.5 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 117) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:24:22.513458+0000 osd.1 (osd.1) 116 : cluster [DBG] 2.5 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:24:22.527621+0000 osd.1 (osd.1) 117 : cluster [DBG] 2.5 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 884736 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:54.456765+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.a scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.974453926s of 11.105629921s, submitted: 30
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.a scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 884736 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:55.456953+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:24:24.563786+0000 osd.1 (osd.1) 118 : cluster [DBG] 2.a scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:24:24.577867+0000 osd.1 (osd.1) 119 : cluster [DBG] 2.a scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 119) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:24:24.563786+0000 osd.1 (osd.1) 118 : cluster [DBG] 2.a scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:24:24.577867+0000 osd.1 (osd.1) 119 : cluster [DBG] 2.a scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 794169 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 876544 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:56.457186+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 868352 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:57.457316+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 868352 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:58.457477+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:24:27.496966+0000 osd.1 (osd.1) 120 : cluster [DBG] 2.d scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:24:27.510980+0000 osd.1 (osd.1) 121 : cluster [DBG] 2.d scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 121) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:24:27.496966+0000 osd.1 (osd.1) 120 : cluster [DBG] 2.d scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:24:27.510980+0000 osd.1 (osd.1) 121 : cluster [DBG] 2.d scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 860160 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:59.457686+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 860160 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:00.457808+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 795316 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 860160 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:01.457946+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 851968 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:02.458064+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 851968 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:03.458205+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:24:32.485382+0000 osd.1 (osd.1) 122 : cluster [DBG] 5.9 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:24:32.499437+0000 osd.1 (osd.1) 123 : cluster [DBG] 5.9 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 123) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:24:32.485382+0000 osd.1 (osd.1) 122 : cluster [DBG] 5.9 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:24:32.499437+0000 osd.1 (osd.1) 123 : cluster [DBG] 5.9 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 843776 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:04.458594+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 843776 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:05.458733+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796463 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 835584 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:06.458851+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 835584 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:07.458957+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 827392 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:08.459068+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.15 deep-scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.899918556s of 13.920128822s, submitted: 6
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.15 deep-scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 827392 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:09.459202+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:24:38.484069+0000 osd.1 (osd.1) 124 : cluster [DBG] 2.15 deep-scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:24:38.498107+0000 osd.1 (osd.1) 125 : cluster [DBG] 2.15 deep-scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 125) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:24:38.484069+0000 osd.1 (osd.1) 124 : cluster [DBG] 2.15 deep-scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:24:38.498107+0000 osd.1 (osd.1) 125 : cluster [DBG] 2.15 deep-scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 827392 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:10.459444+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 797611 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 819200 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:11.459587+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 819200 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:12.459746+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 811008 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:13.459890+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 811008 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:14.460068+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 802816 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:15.460225+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:24:44.475271+0000 osd.1 (osd.1) 126 : cluster [DBG] 5.13 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:24:44.489356+0000 osd.1 (osd.1) 127 : cluster [DBG] 5.13 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 127) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:24:44.475271+0000 osd.1 (osd.1) 126 : cluster [DBG] 5.13 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:24:44.489356+0000 osd.1 (osd.1) 127 : cluster [DBG] 5.13 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 798759 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 794624 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:16.460416+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 794624 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.17 deep-scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:17.460512+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 1 last_log 128 sent 127 num 1 unsent 1 sending 1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:24:47.456811+0000 osd.1 (osd.1) 128 : cluster [DBG] 2.17 deep-scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.17 deep-scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 128) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:24:47.456811+0000 osd.1 (osd.1) 128 : cluster [DBG] 2.17 deep-scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 778240 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:18.460697+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 1 last_log 129 sent 128 num 1 unsent 1 sending 1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:24:47.470843+0000 osd.1 (osd.1) 129 : cluster [DBG] 2.17 deep-scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 129) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:24:47.470843+0000 osd.1 (osd.1) 129 : cluster [DBG] 2.17 deep-scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 778240 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:19.460947+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 770048 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:20.461095+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 799907 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 770048 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:21.461272+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.013246536s of 13.031503677s, submitted: 6
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 761856 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:22.461368+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:24:51.515560+0000 osd.1 (osd.1) 130 : cluster [DBG] 5.11 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:24:51.529680+0000 osd.1 (osd.1) 131 : cluster [DBG] 5.11 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 131) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:24:51.515560+0000 osd.1 (osd.1) 130 : cluster [DBG] 5.11 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:24:51.529680+0000 osd.1 (osd.1) 131 : cluster [DBG] 5.11 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 753664 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:23.461522+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 753664 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:24.461665+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 745472 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:25.461804+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:24:54.529922+0000 osd.1 (osd.1) 132 : cluster [DBG] 2.1b scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:24:54.544021+0000 osd.1 (osd.1) 133 : cluster [DBG] 2.1b scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 133) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:24:54.529922+0000 osd.1 (osd.1) 132 : cluster [DBG] 2.1b scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:24:54.544021+0000 osd.1 (osd.1) 133 : cluster [DBG] 2.1b scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 802203 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 745472 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:26.461984+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 745472 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:27.462122+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 737280 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:28.462216+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:24:57.474952+0000 osd.1 (osd.1) 134 : cluster [DBG] 5.16 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:24:57.488986+0000 osd.1 (osd.1) 135 : cluster [DBG] 5.16 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 135) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:24:57.474952+0000 osd.1 (osd.1) 134 : cluster [DBG] 5.16 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:24:57.488986+0000 osd.1 (osd.1) 135 : cluster [DBG] 5.16 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 729088 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:29.462427+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 720896 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:30.462557+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 803351 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 720896 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:31.462699+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.022245407s of 10.044643402s, submitted: 6
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 720896 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:32.462835+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:01.560236+0000 osd.1 (osd.1) 136 : cluster [DBG] 5.12 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:01.574315+0000 osd.1 (osd.1) 137 : cluster [DBG] 5.12 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 137) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:01.560236+0000 osd.1 (osd.1) 136 : cluster [DBG] 5.12 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:01.574315+0000 osd.1 (osd.1) 137 : cluster [DBG] 5.12 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 712704 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:33.463067+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:02.593363+0000 osd.1 (osd.1) 138 : cluster [DBG] 8.1 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:02.607441+0000 osd.1 (osd.1) 139 : cluster [DBG] 8.1 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 139) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:02.593363+0000 osd.1 (osd.1) 138 : cluster [DBG] 8.1 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:02.607441+0000 osd.1 (osd.1) 139 : cluster [DBG] 8.1 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 712704 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:34.463381+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 704512 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:35.463490+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 805646 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 696320 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:36.463631+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 696320 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:37.463782+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:06.568805+0000 osd.1 (osd.1) 140 : cluster [DBG] 8.3 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:06.582693+0000 osd.1 (osd.1) 141 : cluster [DBG] 8.3 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 141) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:06.568805+0000 osd.1 (osd.1) 140 : cluster [DBG] 8.3 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:06.582693+0000 osd.1 (osd.1) 141 : cluster [DBG] 8.3 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 688128 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:38.463985+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:07.613188+0000 osd.1 (osd.1) 142 : cluster [DBG] 8.5 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:07.627305+0000 osd.1 (osd.1) 143 : cluster [DBG] 8.5 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 143) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:07.613188+0000 osd.1 (osd.1) 142 : cluster [DBG] 8.5 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:07.627305+0000 osd.1 (osd.1) 143 : cluster [DBG] 8.5 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 679936 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:39.464147+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 679936 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:40.464265+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 807940 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 679936 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:41.464397+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 671744 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:42.464498+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 671744 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:43.464608+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.152551651s of 12.179207802s, submitted: 8
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 663552 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:44.464783+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:13.739575+0000 osd.1 (osd.1) 144 : cluster [DBG] 8.7 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:13.753592+0000 osd.1 (osd.1) 145 : cluster [DBG] 8.7 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 145) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:13.739575+0000 osd.1 (osd.1) 144 : cluster [DBG] 8.7 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:13.753592+0000 osd.1 (osd.1) 145 : cluster [DBG] 8.7 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 663552 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:45.464962+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:14.729686+0000 osd.1 (osd.1) 146 : cluster [DBG] 8.8 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:14.743869+0000 osd.1 (osd.1) 147 : cluster [DBG] 8.8 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.a scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.a scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 147) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:14.729686+0000 osd.1 (osd.1) 146 : cluster [DBG] 8.8 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:14.743869+0000 osd.1 (osd.1) 147 : cluster [DBG] 8.8 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 811381 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:46.465265+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:15.764298+0000 osd.1 (osd.1) 148 : cluster [DBG] 8.a scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:15.778342+0000 osd.1 (osd.1) 149 : cluster [DBG] 8.a scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 638976 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 149) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:15.764298+0000 osd.1 (osd.1) 148 : cluster [DBG] 8.a scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:15.778342+0000 osd.1 (osd.1) 149 : cluster [DBG] 8.a scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:47.465542+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 638976 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:48.465710+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:17.726812+0000 osd.1 (osd.1) 150 : cluster [DBG] 8.13 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:17.740923+0000 osd.1 (osd.1) 151 : cluster [DBG] 8.13 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 614400 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 151) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:17.726812+0000 osd.1 (osd.1) 150 : cluster [DBG] 8.13 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:17.740923+0000 osd.1 (osd.1) 151 : cluster [DBG] 8.13 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:49.465888+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:18.685336+0000 osd.1 (osd.1) 152 : cluster [DBG] 8.16 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:18.699442+0000 osd.1 (osd.1) 153 : cluster [DBG] 8.16 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 598016 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 153) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:18.685336+0000 osd.1 (osd.1) 152 : cluster [DBG] 8.16 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:18.699442+0000 osd.1 (osd.1) 153 : cluster [DBG] 8.16 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:50.466140+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:19.678238+0000 osd.1 (osd.1) 154 : cluster [DBG] 8.17 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:19.692369+0000 osd.1 (osd.1) 155 : cluster [DBG] 8.17 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 581632 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 155) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:19.678238+0000 osd.1 (osd.1) 154 : cluster [DBG] 8.17 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:19.692369+0000 osd.1 (osd.1) 155 : cluster [DBG] 8.17 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 814825 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:51.466325+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 581632 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:52.466495+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 581632 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:53.466617+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:22.627783+0000 osd.1 (osd.1) 156 : cluster [DBG] 8.19 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:22.641890+0000 osd.1 (osd.1) 157 : cluster [DBG] 8.19 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 573440 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 157) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:22.627783+0000 osd.1 (osd.1) 156 : cluster [DBG] 8.19 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:22.641890+0000 osd.1 (osd.1) 157 : cluster [DBG] 8.19 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:54.466891+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 573440 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.773813248s of 10.820886612s, submitted: 14
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:55.467083+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:24.560242+0000 osd.1 (osd.1) 158 : cluster [DBG] 8.1e scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:24.574367+0000 osd.1 (osd.1) 159 : cluster [DBG] 8.1e scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 565248 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 159) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:24.560242+0000 osd.1 (osd.1) 158 : cluster [DBG] 8.1e scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:24.574367+0000 osd.1 (osd.1) 159 : cluster [DBG] 8.1e scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817121 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:56.467258+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 565248 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:57.467403+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:26.553620+0000 osd.1 (osd.1) 160 : cluster [DBG] 9.2 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:26.585357+0000 osd.1 (osd.1) 161 : cluster [DBG] 9.2 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 565248 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 161) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:26.553620+0000 osd.1 (osd.1) 160 : cluster [DBG] 9.2 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:26.585357+0000 osd.1 (osd.1) 161 : cluster [DBG] 9.2 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:58.467638+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 557056 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:59.467769+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 557056 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:00.467954+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:29.592002+0000 osd.1 (osd.1) 162 : cluster [DBG] 9.4 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:29.641373+0000 osd.1 (osd.1) 163 : cluster [DBG] 9.4 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 548864 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 163) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:29.592002+0000 osd.1 (osd.1) 162 : cluster [DBG] 9.4 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:29.641373+0000 osd.1 (osd.1) 163 : cluster [DBG] 9.4 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 819415 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:01.468123+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 548864 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:02.468278+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 548864 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:03.468421+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.a scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 532480 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.a scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:04.468587+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:33.506299+0000 osd.1 (osd.1) 164 : cluster [DBG] 9.a scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:33.552164+0000 osd.1 (osd.1) 165 : cluster [DBG] 9.a scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 524288 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 165) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:33.506299+0000 osd.1 (osd.1) 164 : cluster [DBG] 9.a scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:33.552164+0000 osd.1 (osd.1) 165 : cluster [DBG] 9.a scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:05.468856+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 516096 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.804003716s of 11.833882332s, submitted: 8
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 821710 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:06.468973+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:36.394170+0000 osd.1 (osd.1) 166 : cluster [DBG] 9.10 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:36.415394+0000 osd.1 (osd.1) 167 : cluster [DBG] 9.10 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 499712 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 167) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:36.394170+0000 osd.1 (osd.1) 166 : cluster [DBG] 9.10 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:36.415394+0000 osd.1 (osd.1) 167 : cluster [DBG] 9.10 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:07.469116+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:37.431397+0000 osd.1 (osd.1) 168 : cluster [DBG] 9.12 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:37.459676+0000 osd.1 (osd.1) 169 : cluster [DBG] 9.12 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 499712 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 169) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:37.431397+0000 osd.1 (osd.1) 168 : cluster [DBG] 9.12 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:37.459676+0000 osd.1 (osd.1) 169 : cluster [DBG] 9.12 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:08.469252+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 491520 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:09.469368+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 483328 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:10.469468+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 1 last_log 170 sent 169 num 1 unsent 1 sending 1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:40.444564+0000 osd.1 (osd.1) 170 : cluster [DBG] 9.14 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 483328 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 170) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:40.444564+0000 osd.1 (osd.1) 170 : cluster [DBG] 9.14 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:11.469627+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 1 last_log 171 sent 170 num 1 unsent 1 sending 1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:40.476346+0000 osd.1 (osd.1) 171 : cluster [DBG] 9.14 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 824006 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 483328 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 171) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:40.476346+0000 osd.1 (osd.1) 171 : cluster [DBG] 9.14 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:12.469796+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 475136 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:13.469937+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 475136 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:14.470300+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 491520 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:15.470404+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 491520 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:16.470650+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 824006 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 491520 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:17.470831+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 483328 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:18.470977+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.071556091s of 12.114300728s, submitted: 6
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 483328 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:19.471132+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:48.508427+0000 osd.1 (osd.1) 172 : cluster [DBG] 9.1a scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:48.536686+0000 osd.1 (osd.1) 173 : cluster [DBG] 9.1a scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 475136 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 173) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:48.508427+0000 osd.1 (osd.1) 172 : cluster [DBG] 9.1a scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:48.536686+0000 osd.1 (osd.1) 173 : cluster [DBG] 9.1a scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:20.471394+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 466944 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:21.471539+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:50.528503+0000 osd.1 (osd.1) 174 : cluster [DBG] 11.5 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:50.542643+0000 osd.1 (osd.1) 175 : cluster [DBG] 11.5 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826302 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 466944 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 175) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:50.528503+0000 osd.1 (osd.1) 174 : cluster [DBG] 11.5 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:50.542643+0000 osd.1 (osd.1) 175 : cluster [DBG] 11.5 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:22.471797+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 458752 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:23.471937+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:52.570332+0000 osd.1 (osd.1) 176 : cluster [DBG] 11.7 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:25:52.584450+0000 osd.1 (osd.1) 177 : cluster [DBG] 11.7 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 458752 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 177) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:52.570332+0000 osd.1 (osd.1) 176 : cluster [DBG] 11.7 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:25:52.584450+0000 osd.1 (osd.1) 177 : cluster [DBG] 11.7 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:24.472125+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 450560 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:25.472300+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 450560 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:26.472461+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 827450 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 442368 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:27.472587+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 442368 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 442368 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:28.710746+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 434176 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:29.710950+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 434176 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:30.711099+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 827450 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 425984 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:31.711226+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 425984 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:32.711344+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.a scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.998170853s of 15.020214081s, submitted: 6
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.a scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 417792 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:33.711480+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:03.528743+0000 osd.1 (osd.1) 178 : cluster [DBG] 11.a scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:03.542875+0000 osd.1 (osd.1) 179 : cluster [DBG] 11.a scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 179) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:03.528743+0000 osd.1 (osd.1) 178 : cluster [DBG] 11.a scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:03.542875+0000 osd.1 (osd.1) 179 : cluster [DBG] 11.a scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 417792 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:34.711831+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 417792 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:35.712111+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828598 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 409600 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.c scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.c scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:36.712216+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:06.613494+0000 osd.1 (osd.1) 180 : cluster [DBG] 11.c scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:06.627570+0000 osd.1 (osd.1) 181 : cluster [DBG] 11.c scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 181) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:06.613494+0000 osd.1 (osd.1) 180 : cluster [DBG] 11.c scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:06.627570+0000 osd.1 (osd.1) 181 : cluster [DBG] 11.c scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 409600 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:37.712381+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 401408 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:38.712664+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:08.656680+0000 osd.1 (osd.1) 182 : cluster [DBG] 11.13 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:08.670846+0000 osd.1 (osd.1) 183 : cluster [DBG] 11.13 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 183) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:08.656680+0000 osd.1 (osd.1) 182 : cluster [DBG] 11.13 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:08.670846+0000 osd.1 (osd.1) 183 : cluster [DBG] 11.13 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 401408 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:39.712831+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:09.694320+0000 osd.1 (osd.1) 184 : cluster [DBG] 11.16 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:09.708368+0000 osd.1 (osd.1) 185 : cluster [DBG] 11.16 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 185) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:09.694320+0000 osd.1 (osd.1) 184 : cluster [DBG] 11.16 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:09.708368+0000 osd.1 (osd.1) 185 : cluster [DBG] 11.16 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 393216 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:40.712989+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 833193 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 393216 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:41.713096+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:10.717407+0000 osd.1 (osd.1) 186 : cluster [DBG] 11.1d scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:10.731552+0000 osd.1 (osd.1) 187 : cluster [DBG] 11.1d scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 187) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:10.717407+0000 osd.1 (osd.1) 186 : cluster [DBG] 11.1d scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:10.731552+0000 osd.1 (osd.1) 187 : cluster [DBG] 11.1d scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 393216 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.b scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.b scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:42.713257+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:12.667883+0000 osd.1 (osd.1) 188 : cluster [DBG] 10.b scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:12.681964+0000 osd.1 (osd.1) 189 : cluster [DBG] 10.b scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 189) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:12.667883+0000 osd.1 (osd.1) 188 : cluster [DBG] 10.b scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:12.681964+0000 osd.1 (osd.1) 189 : cluster [DBG] 10.b scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 385024 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.109220505s of 10.150906563s, submitted: 12
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:43.713426+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:13.679694+0000 osd.1 (osd.1) 190 : cluster [DBG] 10.12 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:13.693792+0000 osd.1 (osd.1) 191 : cluster [DBG] 10.12 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 191) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:13.679694+0000 osd.1 (osd.1) 190 : cluster [DBG] 10.12 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:13.693792+0000 osd.1 (osd.1) 191 : cluster [DBG] 10.12 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 376832 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:44.713793+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 368640 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:45.713946+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:15.697514+0000 osd.1 (osd.1) 192 : cluster [DBG] 10.13 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:15.711697+0000 osd.1 (osd.1) 193 : cluster [DBG] 10.13 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 193) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:15.697514+0000 osd.1 (osd.1) 192 : cluster [DBG] 10.13 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:15.711697+0000 osd.1 (osd.1) 193 : cluster [DBG] 10.13 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836639 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 368640 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:46.714137+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 368640 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:47.714256+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 360448 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:48.714366+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 352256 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:49.714527+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 344064 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:50.714660+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836639 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 344064 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:51.714802+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 335872 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:52.714969+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:22.683229+0000 osd.1 (osd.1) 194 : cluster [DBG] 10.1a scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:22.696845+0000 osd.1 (osd.1) 195 : cluster [DBG] 10.1a scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 195) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:22.683229+0000 osd.1 (osd.1) 194 : cluster [DBG] 10.1a scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:22.696845+0000 osd.1 (osd.1) 195 : cluster [DBG] 10.1a scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 319488 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:53.715169+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.945559502s of 10.964863777s, submitted: 6
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 319488 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:54.715296+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:24.644607+0000 osd.1 (osd.1) 196 : cluster [DBG] 10.10 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:24.658836+0000 osd.1 (osd.1) 197 : cluster [DBG] 10.10 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 197) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:24.644607+0000 osd.1 (osd.1) 196 : cluster [DBG] 10.10 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:24.658836+0000 osd.1 (osd.1) 197 : cluster [DBG] 10.10 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 311296 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:55.715956+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838937 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 311296 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:56.716075+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 303104 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:57.716507+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 303104 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:58.716921+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 294912 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:59.717082+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 294912 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:00.718012+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.19 deep-scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.19 deep-scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 840086 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 294912 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:01.718142+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:31.599193+0000 osd.1 (osd.1) 198 : cluster [DBG] 10.19 deep-scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:31.613311+0000 osd.1 (osd.1) 199 : cluster [DBG] 10.19 deep-scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 199) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:31.599193+0000 osd.1 (osd.1) 198 : cluster [DBG] 10.19 deep-scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:31.613311+0000 osd.1 (osd.1) 199 : cluster [DBG] 10.19 deep-scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 278528 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:02.718428+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 278528 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:03.718536+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:33.625534+0000 osd.1 (osd.1) 200 : cluster [DBG] 10.6 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:33.639567+0000 osd.1 (osd.1) 201 : cluster [DBG] 10.6 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 201) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:33.625534+0000 osd.1 (osd.1) 200 : cluster [DBG] 10.6 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:33.639567+0000 osd.1 (osd.1) 201 : cluster [DBG] 10.6 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:04.718881+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 203 sent 201 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:34.589491+0000 osd.1 (osd.1) 202 : cluster [DBG] 10.2 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:34.603531+0000 osd.1 (osd.1) 203 : cluster [DBG] 10.2 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 262144 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 203) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:34.589491+0000 osd.1 (osd.1) 202 : cluster [DBG] 10.2 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:34.603531+0000 osd.1 (osd.1) 203 : cluster [DBG] 10.2 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:05.719169+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 262144 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.889883041s of 11.918990135s, submitted: 8
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 843531 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:06.719300+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 205 sent 203 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:36.563772+0000 osd.1 (osd.1) 204 : cluster [DBG] 10.14 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:36.581388+0000 osd.1 (osd.1) 205 : cluster [DBG] 10.14 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 253952 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 205) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:36.563772+0000 osd.1 (osd.1) 204 : cluster [DBG] 10.14 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:36.581388+0000 osd.1 (osd.1) 205 : cluster [DBG] 10.14 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:07.719464+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 207 sent 205 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:37.563450+0000 osd.1 (osd.1) 206 : cluster [DBG] 10.11 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:37.577322+0000 osd.1 (osd.1) 207 : cluster [DBG] 10.11 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 253952 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 207) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:37.563450+0000 osd.1 (osd.1) 206 : cluster [DBG] 10.11 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:37.577322+0000 osd.1 (osd.1) 207 : cluster [DBG] 10.11 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:08.719633+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 253952 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.f scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.f scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:09.719765+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 209 sent 207 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:39.605321+0000 osd.1 (osd.1) 208 : cluster [DBG] 10.f scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:39.619339+0000 osd.1 (osd.1) 209 : cluster [DBG] 10.f scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 245760 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 209) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:39.605321+0000 osd.1 (osd.1) 208 : cluster [DBG] 10.f scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:39.619339+0000 osd.1 (osd.1) 209 : cluster [DBG] 10.f scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:10.720324+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 211 sent 209 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:40.586595+0000 osd.1 (osd.1) 210 : cluster [DBG] 9.15 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:40.614889+0000 osd.1 (osd.1) 211 : cluster [DBG] 9.15 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 229376 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 211) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:40.586595+0000 osd.1 (osd.1) 210 : cluster [DBG] 9.15 scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:40.614889+0000 osd.1 (osd.1) 211 : cluster [DBG] 9.15 scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 846976 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:11.720772+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 221184 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:12.720920+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 221184 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:13.721068+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 212992 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:14.721229+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 212992 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:15.721491+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 212992 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 846976 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:16.721643+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 204800 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:17.721811+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 204800 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:18.721982+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 196608 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.1f deep-scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.983187675s of 13.045228958s, submitted: 8
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.1f deep-scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:19.722118+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  log_queue is 2 last_log 213 sent 211 num 2 unsent 2 sending 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:49.608736+0000 osd.1 (osd.1) 212 : cluster [DBG] 9.1f deep-scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  will send 2025-11-24T18:26:49.640470+0000 osd.1 (osd.1) 213 : cluster [DBG] 9.1f deep-scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 196608 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client handle_log_ack log(last 213) v1
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:49.608736+0000 osd.1 (osd.1) 212 : cluster [DBG] 9.1f deep-scrub starts
Nov 24 18:51:54 compute-0 ceph-osd[89581]: log_client  logged 2025-11-24T18:26:49.640470+0000 osd.1 (osd.1) 213 : cluster [DBG] 9.1f deep-scrub ok
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:20.722314+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 188416 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:21.722413+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 188416 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:22.722521+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 188416 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:23.722638+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 180224 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:24.722745+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 180224 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:25.722887+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 172032 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:26.723025+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 172032 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:27.723140+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 172032 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:28.723272+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 163840 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:29.723450+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 163840 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:30.723601+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 155648 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:31.723776+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 155648 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:32.723950+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 147456 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:33.724095+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 147456 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:34.724245+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 147456 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:35.724411+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 139264 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:36.724516+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 139264 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:37.724616+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 131072 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:38.724787+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 131072 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:39.724963+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 114688 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:40.725079+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 114688 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:41.725190+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 106496 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:42.725378+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 106496 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:43.725533+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 106496 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:44.725688+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 106496 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:45.725862+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 106496 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:46.726014+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 106496 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:47.726183+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 98304 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:48.726743+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 98304 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:49.726885+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 98304 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:50.727050+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 90112 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:51.727204+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 90112 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:52.727329+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 81920 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:53.727439+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 81920 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:54.727623+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 73728 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:55.727779+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 73728 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:56.727920+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 65536 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:57.728056+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 65536 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:58.728194+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 65536 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:59.728279+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 57344 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:00.728436+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 57344 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:01.728562+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 49152 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:02.728686+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 49152 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:03.728954+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 40960 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:04.729207+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 40960 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:05.729355+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 40960 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:06.729541+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 32768 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:07.729686+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 32768 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:08.729779+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 24576 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:09.729914+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 24576 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:10.730023+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 16384 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:11.730140+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 16384 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:12.730271+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 16384 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:13.730428+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 8192 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:14.730619+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 8192 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:15.730771+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 0 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:16.730885+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 0 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:17.731007+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 0 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:18.731214+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1040384 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:19.731329+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1040384 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:20.731451+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 1032192 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:21.731622+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 1032192 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:22.731765+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 1032192 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:23.731886+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 1024000 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:24.732126+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1015808 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:25.732252+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 1007616 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:26.732388+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 1007616 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:27.732508+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 999424 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:28.732637+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 999424 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:29.732772+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 991232 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:30.732890+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 991232 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:31.733128+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 991232 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:32.733308+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 983040 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:33.733428+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 983040 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:34.733566+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 974848 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:35.733717+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 974848 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:36.733827+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 974848 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:37.734052+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 966656 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:38.734232+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 966656 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:39.734399+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 950272 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:40.734614+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 950272 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:41.734813+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 942080 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:42.735011+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 942080 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:43.735177+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 942080 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:44.735390+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 933888 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:45.735576+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 933888 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:46.735717+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 925696 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:47.735853+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 925696 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:48.735980+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 917504 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:49.736109+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 917504 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:50.736257+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 909312 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:51.736381+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 909312 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:52.736490+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 901120 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:53.736612+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 901120 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:54.736787+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 892928 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:55.736985+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 884736 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:56.737098+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 884736 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:57.737230+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 876544 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:58.737340+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 876544 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:59.737469+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 876544 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:00.737604+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 868352 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:01.737709+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 868352 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:02.737817+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 860160 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:03.737942+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 860160 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:04.738411+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 843776 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:05.738971+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 843776 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:06.740135+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 843776 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:07.740365+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 835584 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:08.741849+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 835584 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:09.742551+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 819200 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:10.743479+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 819200 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:11.743840+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 819200 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:12.744057+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 811008 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:13.744179+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 811008 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:14.744330+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 811008 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:15.744529+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 802816 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:16.745120+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 802816 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:17.745246+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 794624 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:18.745360+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 794624 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:19.745505+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 786432 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:20.745735+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 786432 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:21.746131+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 786432 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:22.746382+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 778240 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:23.746522+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 778240 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:24.746708+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 770048 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:25.746867+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 770048 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:26.747120+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 770048 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:27.747286+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 761856 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:28.747482+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 761856 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:29.747627+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 753664 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:30.747830+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 753664 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:31.747952+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 745472 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:32.748125+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 745472 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:33.748270+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 745472 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:34.748430+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 737280 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:35.748597+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 737280 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:36.748792+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 729088 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:37.748997+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 729088 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:38.749154+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 720896 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:39.749309+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 720896 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:40.749481+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 720896 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:41.749689+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 712704 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:42.749871+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 712704 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:43.750114+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 704512 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:44.750436+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 704512 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:45.750652+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 696320 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:46.750964+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 696320 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:47.751156+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 696320 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:48.751309+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 688128 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:49.751474+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 688128 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:50.751651+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 688128 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:51.751857+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 679936 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:52.752109+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 679936 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:53.752355+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 671744 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:54.752628+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 663552 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:55.752848+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 655360 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:56.753038+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 655360 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:57.753238+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 655360 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:58.753482+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 647168 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:59.753688+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 655360 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:00.753996+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 647168 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:01.754241+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 647168 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:02.754455+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 647168 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:03.754630+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 638976 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:04.754821+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 622592 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:05.754993+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 622592 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:06.755163+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 614400 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:07.755424+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 614400 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:08.755595+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 606208 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:09.756041+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 606208 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:10.756458+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 598016 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:11.756794+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 598016 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:12.757022+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 598016 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:13.757329+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 589824 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:14.757642+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 589824 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:15.758005+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 581632 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:16.758262+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 581632 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:17.758534+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 573440 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:18.758821+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 573440 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:19.759006+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 573440 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:20.759157+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 565248 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:21.759372+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 565248 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:22.759527+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 557056 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:23.759695+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 557056 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:24.760011+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 548864 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:25.760174+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 548864 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:26.760309+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 548864 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:27.760479+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 540672 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:28.760602+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 540672 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:29.760744+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 540672 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:30.760869+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 532480 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:31.760977+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 532480 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:32.761183+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 524288 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:33.761326+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 524288 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:34.761483+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 516096 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:35.761652+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 516096 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:36.761815+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 516096 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:37.761965+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 507904 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:38.762074+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 507904 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:39.762194+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 499712 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:40.762312+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 499712 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:41.762444+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 491520 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:42.762671+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 491520 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:43.762922+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 491520 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:44.763123+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 475136 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:45.763301+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 475136 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:46.763425+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 466944 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:47.763589+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 466944 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:48.763746+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 458752 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:49.763858+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 458752 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:50.763988+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 458752 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:51.764151+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 450560 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:52.764289+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 450560 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:53.764427+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 434176 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:54.764585+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 434176 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:55.764703+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 434176 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:56.764871+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 425984 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:57.765039+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 425984 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:58.765180+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 417792 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:59.765310+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 417792 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:00.765426+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 417792 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:01.765706+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 409600 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:02.765850+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 409600 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:03.765980+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 401408 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:04.766120+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 401408 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:05.766302+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 401408 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:06.766453+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 393216 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:07.766569+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 393216 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:08.766687+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 385024 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:09.766837+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Cumulative writes: 6505 writes, 27K keys, 6505 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6505 writes, 1119 syncs, 5.81 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6505 writes, 27K keys, 6505 commit groups, 1.0 writes per commit group, ingest: 19.27 MB, 0.03 MB/s
                                           Interval WAL: 6505 writes, 1119 syncs, 5.81 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.055       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.055       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.055       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.095       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.095       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.095       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 319488 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:10.766950+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 311296 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:11.767102+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 311296 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:12.767244+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 303104 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:13.767412+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 303104 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:14.767647+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 303104 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:15.767885+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 294912 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:16.768142+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 294912 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:17.768357+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 294912 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:18.768576+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 286720 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:19.768743+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 278528 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:20.768957+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 278528 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:21.769118+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 278528 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:22.769279+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 270336 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:23.769458+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 270336 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:24.769647+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 270336 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:25.769787+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 262144 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:26.769916+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 262144 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:27.770025+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 253952 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:28.770129+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 245760 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:29.770263+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 237568 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:30.770384+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 237568 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:31.770518+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 237568 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:32.770632+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 229376 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:33.770744+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:34.770916+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 229376 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:35.771064+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 221184 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:36.771208+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 221184 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:37.771350+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 221184 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:38.771474+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 212992 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:39.771653+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 221184 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:40.771785+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 221184 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:41.771971+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 221184 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:42.772077+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 221184 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:43.772195+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 212992 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:44.772344+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 212992 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:45.772413+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 204800 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:46.772555+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 204800 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:47.772705+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75325440 unmapped: 196608 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:48.772840+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75325440 unmapped: 196608 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:49.772983+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 188416 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:50.773149+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 188416 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:51.773249+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 180224 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:52.773397+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 180224 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:53.773565+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 180224 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:54.773741+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 172032 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:55.773862+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 180224 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:56.774000+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 180224 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:57.774113+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 172032 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:58.774234+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 172032 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:59.774368+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 163840 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:00.774486+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 163840 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:01.774622+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 155648 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:02.774739+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 155648 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:03.774956+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 147456 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:04.775139+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 147456 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:05.775258+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 147456 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:06.775404+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 139264 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:07.775522+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 139264 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:08.775628+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 131072 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:09.775765+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 131072 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:10.775873+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 131072 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:11.775971+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 122880 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:12.776100+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 122880 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:13.776203+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 114688 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:14.776400+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 114688 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:15.776521+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 106496 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:16.776732+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 106496 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:17.776934+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 106496 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:18.777120+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 98304 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:19.777278+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 98304 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:20.777476+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 98304 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:21.777609+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 90112 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:22.777751+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 90112 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:23.777818+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 81920 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:24.778092+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 81920 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:25.778217+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 81920 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:26.778343+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 73728 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:27.778445+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 73728 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:28.778558+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 65536 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:29.778664+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 310.267303467s of 310.278106689s, submitted: 2
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 57344 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:30.778765+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 2023424 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:31.778893+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 2015232 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:32.779075+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 2015232 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:33.779216+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 2015232 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:34.779380+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 2023424 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:35.779532+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 2023424 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:36.779648+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 2023424 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:37.779799+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 2023424 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:38.779936+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 2015232 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:39.780047+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 2015232 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:40.780181+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 2015232 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:41.780336+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 2007040 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:42.780449+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 2007040 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:43.780626+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75620352 unmapped: 1998848 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:44.780792+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 1990656 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:45.780937+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 1990656 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:46.781164+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 1982464 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:47.781321+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 1982464 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:48.781463+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 1974272 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:49.781649+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 1974272 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:50.781808+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 1974272 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:51.781937+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75653120 unmapped: 1966080 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:52.782045+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75653120 unmapped: 1966080 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:53.782192+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 1957888 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:54.782336+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 1957888 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:55.782571+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 1949696 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:56.782841+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 1949696 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:57.782991+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 1949696 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:58.783118+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 1941504 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:59.783269+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 1941504 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:00.783410+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 1933312 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:01.783556+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 1933312 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:02.783756+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 1933312 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:03.783883+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 1925120 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:04.784038+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 1908736 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:05.784164+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 1900544 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:06.784277+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 1900544 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:07.784418+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 1900544 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:08.784571+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 1892352 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:09.784709+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 1892352 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:10.784861+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:11.785040+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:12.785164+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:13.785309+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:14.785464+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:15.785613+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:16.785750+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:17.785876+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:18.785966+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:19.786086+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:20.786332+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:21.786565+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:22.786743+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:23.786884+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:24.787252+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:25.787451+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:26.787631+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:27.787759+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:28.787886+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:29.788005+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 1875968 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:30.788139+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 1875968 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:31.788270+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 1875968 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:32.788404+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 1875968 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:33.788588+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 1875968 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:34.788755+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 1875968 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:35.788891+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:36.789127+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:37.789276+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:38.789396+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:39.789508+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:40.789636+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:41.789757+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:42.789944+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:43.790063+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:44.790208+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:45.790333+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:46.790445+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:47.790644+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:48.790811+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:49.790940+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:50.791071+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:51.791185+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:52.791313+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:53.791439+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:54.791588+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:55.791709+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:56.791841+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:57.791955+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:58.792061+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:59.792180+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:00.792286+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:01.792393+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:02.792515+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:03.792661+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:04.792804+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 1843200 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:05.792967+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 1843200 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:06.793134+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 1843200 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:07.793286+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1835008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:08.793458+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1835008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:09.793580+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1835008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:10.793758+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1835008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:11.793917+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1835008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:12.794005+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1835008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:13.794132+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1835008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:14.794311+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:15.794438+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:16.794562+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:17.794730+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:18.794855+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:19.795004+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:20.795108+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:21.795268+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:22.795390+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:23.795529+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:24.795730+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:25.795965+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:26.796089+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:27.796213+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:28.796331+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:29.796455+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:30.796580+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:31.796813+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:32.796979+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:33.797097+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:34.797242+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:35.797352+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:36.797469+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:37.797603+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:38.797741+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:39.797844+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:40.797956+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:41.798061+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:42.798170+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:43.798294+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:44.798450+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:45.798568+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:46.798696+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:47.798822+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:48.798938+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:49.799063+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:50.800425+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:51.800565+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:52.800694+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:53.800824+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:54.800934+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:55.801086+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:56.801238+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:57.801362+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:58.801550+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:59.801695+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:00.801844+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:01.801992+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:02.802139+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:03.802281+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:04.802436+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:05.802624+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:06.802767+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:07.802892+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:08.803053+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:09.803251+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 1802240 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:10.803393+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 1802240 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:11.803589+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:12.803697+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:13.803834+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:14.804042+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:15.804176+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:16.804292+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:17.804410+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:18.804557+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:19.804680+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:20.804859+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:21.805100+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:22.805331+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:23.805482+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:24.805633+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:25.805750+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:26.805938+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:27.806095+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:28.806230+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:29.806406+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:30.807396+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:31.808050+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:32.808213+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:33.808353+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 1785856 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:34.808528+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 1785856 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:35.808707+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 1785856 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:36.808848+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 1785856 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:37.808940+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 1785856 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:38.809057+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 1785856 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:39.809180+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:40.809291+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:41.809428+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:42.809547+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:43.809612+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:44.809766+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:45.809892+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:46.810049+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:47.810208+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:48.810372+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:49.812490+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1769472 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:50.812608+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1769472 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:51.812775+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1769472 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:52.812913+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1769472 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:53.813067+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:54.813219+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:55.813360+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:56.813488+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:57.813601+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:58.813783+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:59.813936+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:00.814062+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:01.814190+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:02.814524+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:03.814633+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:04.814866+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1753088 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:05.814986+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1753088 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:06.815208+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1753088 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:07.815365+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1753088 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:08.815498+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1753088 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:09.815641+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:10.815768+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:11.815972+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:12.816084+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:13.816244+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:14.816488+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:15.816655+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:16.816777+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:17.816923+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:18.817039+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:19.817145+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 ms_handle_reset con 0x560b41ff1400 session 0x560b413a9860
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b41ff1800
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 ms_handle_reset con 0x560b42032000 session 0x560b41b383c0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42d4cc00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:20.817799+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:21.818604+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:22.818740+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:23.818927+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:24.819083+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:25.819201+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:26.819304+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:27.819450+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:28.819602+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:29.819711+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:30.819837+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:31.819975+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:32.820097+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:33.820298+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:34.820530+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 1728512 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:35.820628+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 1728512 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:36.820775+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 1728512 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:37.820938+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 1728512 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:38.821049+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:39.821247+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:40.821403+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:41.821562+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:42.821711+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:43.821873+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:44.822088+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:45.822239+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:46.822790+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:47.822952+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:48.823178+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:49.823321+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:50.823460+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:51.823608+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:52.823729+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:53.823841+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:54.823974+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:55.824088+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:56.824202+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:57.824320+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:58.824444+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:59.824608+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:00.824771+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:01.824926+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:02.825041+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:03.825145+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:04.825299+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:05.825402+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:06.825504+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:07.825619+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:08.825750+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:09.825953+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:10.826084+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:11.826194+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:12.826331+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:13.826463+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:14.826603+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:15.826724+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:16.826880+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:17.827008+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:18.827126+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:19.827244+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:20.827379+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:21.827520+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:22.827664+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:23.827827+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:24.827960+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:25.828075+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:26.828176+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:27.828288+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:28.828394+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:29.828505+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:30.828602+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:31.828717+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:32.828877+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:33.828979+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:34.829153+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:35.829261+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:36.829388+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:37.829507+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:38.829618+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:39.829726+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:40.829864+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:41.829977+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:42.830081+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:43.830188+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:44.830436+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:45.830546+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:46.830654+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:47.830760+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:48.830888+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:49.830962+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:50.831093+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:51.831202+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:52.831309+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:53.831402+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:54.831559+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:55.831721+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:56.831847+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:57.831982+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:58.832115+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:59.832237+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:00.832856+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:01.832981+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:02.833093+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:03.833240+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:04.833422+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:05.833575+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:06.833764+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:07.833930+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:08.834093+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:09.834229+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:10.834367+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:11.834584+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:12.834684+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:13.834830+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:14.834977+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:15.835084+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:16.835189+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:17.835321+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:18.835429+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:19.835625+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:20.835783+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:21.836010+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:22.836128+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:23.836235+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:24.836391+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:25.836530+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:26.836639+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:27.836750+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:28.836868+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:29.836983+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:30.837118+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:31.837215+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:32.837350+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:33.837497+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:34.837649+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:35.838004+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:36.838111+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:37.838216+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:38.838363+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:39.838521+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:40.838654+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:41.838766+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:42.838873+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:43.838953+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:44.839118+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:45.839379+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:46.839523+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:47.839626+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:48.839801+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:49.839987+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:50.840505+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 1654784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:51.840710+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 1654784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:52.840883+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 1654784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:53.841013+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 1654784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:54.841330+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:55.841520+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:56.841658+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:57.841782+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:58.841944+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:59.842097+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:00.842291+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:01.842440+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:02.842586+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:03.842727+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:04.843090+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:05.872605+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:06.872733+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:07.872841+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:08.872980+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:09.873211+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:10.873365+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:11.873503+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:12.873662+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:13.873790+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:14.874022+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:15.874176+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:16.874299+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:17.874445+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:18.874520+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:19.874652+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:20.874777+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:21.874874+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:22.875010+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:23.875129+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:24.875275+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:25.875427+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:26.875553+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:27.875676+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:28.875818+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:29.875942+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:30.876062+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:31.876180+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:32.881251+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:33.881391+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:34.881557+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:35.881723+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:36.881843+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:37.881994+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:38.882133+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:39.882252+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:40.882372+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:41.882545+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:42.882732+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:43.882852+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 1622016 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:44.883038+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 1622016 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:45.883173+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 1622016 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:46.883332+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 1622016 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:47.883467+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 1622016 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:48.884084+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 1622016 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:49.884326+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:50.884468+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:51.884821+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:52.885038+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:53.885181+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:54.885647+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:55.885849+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:56.885965+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:57.886102+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:58.886240+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:59.886367+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:00.886626+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:01.886767+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:02.886942+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:03.887086+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:04.887237+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:05.887365+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:06.887534+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:07.887716+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:08.887956+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:09.888107+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:10.888209+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:11.888466+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:12.888609+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:13.888730+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:14.888864+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:15.889042+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:16.889159+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:17.889295+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:18.889466+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:19.889581+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:20.889709+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:21.889946+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:22.890147+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:23.890278+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:24.890427+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:25.890536+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:26.890661+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:27.890788+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:28.890958+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:29.891077+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:30.891193+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:31.891307+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 1605632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:32.891422+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 1605632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:33.891537+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:34.891699+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:35.891926+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:36.892119+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:37.892284+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:38.893111+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:39.893289+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:40.893436+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:41.893561+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:42.894159+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:43.894482+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:44.894721+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:45.894925+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:46.895047+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:47.895181+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:48.895341+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:49.895469+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:50.895580+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:51.895721+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:52.895860+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:53.895964+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:54.896100+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:55.896266+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:56.896395+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:57.896552+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:58.896683+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:59.896784+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:00.896957+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:01.897064+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:02.897195+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:03.897302+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:04.897441+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 1589248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:05.897570+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 1589248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:06.897685+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 1589248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:07.897794+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 1589248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:08.897948+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 1589248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:09.898120+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Cumulative writes: 6685 writes, 27K keys, 6685 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6685 writes, 1209 syncs, 5.53 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 271 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.055       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.055       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.055       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.095       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.095       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.095       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 1556480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:10.898318+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 1556480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:11.898419+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 1556480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:12.899523+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 1556480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:13.899682+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 1556480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:14.899950+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 1556480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:15.900072+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 1556480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:16.900198+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 1548288 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:17.900317+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 1548288 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:18.900604+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:19.900743+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:20.900857+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:21.900939+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:22.901058+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:23.901168+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:24.901301+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:25.901446+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:26.901562+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:27.901686+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:28.901805+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:29.901964+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:30.902088+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:31.902253+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 24 18:51:54 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2561857792' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:32.902398+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:33.902491+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:34.902704+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:35.902800+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:36.902884+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:37.903006+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:38.903113+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:39.903234+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:40.903364+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:41.903516+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:42.903619+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:43.903738+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:44.903940+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:45.904063+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:46.904185+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:47.904298+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:48.904445+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:49.904573+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:50.904695+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:51.904819+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:52.904974+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:53.905094+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:54.905359+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:55.905479+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:56.905555+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:57.905680+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:58.905808+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:59.905937+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:00.906091+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:01.906219+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:02.906364+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:03.906539+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:04.906711+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 1507328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:05.906851+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 1507328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:06.906999+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 1507328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:07.907126+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:08.907255+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:09.907358+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:10.907475+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:11.907601+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:12.907731+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:13.907847+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:14.908015+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:15.908213+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:16.908347+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:17.908519+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:18.908627+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:19.908734+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:20.908851+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:21.908971+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:22.909130+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:23.909312+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:24.909474+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:25.909597+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:26.909721+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:27.909857+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:28.958821+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:29.958978+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 599.852172852s of 600.152648926s, submitted: 90
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:30.959100+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 1490944 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:31.959220+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:32.959343+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:33.959500+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:34.959723+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:35.959870+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:36.959983+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:37.960125+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:38.960289+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:39.960416+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:40.960555+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:41.960684+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:42.960839+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:43.960963+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:44.961172+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:45.961359+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:46.961520+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:47.961681+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:48.961804+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:49.961996+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:50.962150+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:51.962266+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:52.962426+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:53.962528+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:54.962717+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:55.962863+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:56.963033+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:57.963189+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:58.963327+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:59.963473+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:00.963735+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:01.963888+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:02.964329+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:03.964966+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:04.965109+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:05.965220+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:06.965383+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:07.965557+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:08.965702+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:09.965820+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:10.965976+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:11.966117+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:12.966240+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:13.966362+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:14.966504+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:15.966626+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:16.966736+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:17.966840+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:18.966955+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:19.967080+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:20.967221+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:21.967391+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:22.967556+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:23.967741+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:24.967947+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:25.968086+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:26.968213+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:27.968346+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:28.968502+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:29.968663+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:30.968808+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:31.968954+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:32.969121+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:33.969272+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:34.969448+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:35.969576+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:36.969728+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:37.969854+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:38.970076+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:39.970235+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:40.970395+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:41.970510+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:42.970671+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:43.970837+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:44.971017+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:45.971114+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:46.971252+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:47.971428+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:48.971573+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:49.971749+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:50.971885+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:51.972084+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:52.972255+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:53.972380+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:54.972591+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:55.972734+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:56.972869+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:57.972980+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:58.973083+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:59.973246+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:00.973421+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:01.973584+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:02.973713+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:03.973857+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:04.974078+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:05.974292+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:06.974701+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:07.975109+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:08.975448+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:09.975717+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:10.975892+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:11.976069+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:12.976307+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:13.976520+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:14.976779+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:15.976991+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:16.977165+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:17.977377+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:18.977559+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:19.977783+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:20.977978+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:21.978120+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:22.978300+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:23.978494+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:24.978667+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:25.978833+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:26.978957+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:27.979108+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:28.979270+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:29.979430+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:30.979600+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:31.979757+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:32.979984+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:33.980117+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:34.980321+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:35.980523+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:36.980747+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:37.980952+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:38.982839+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:39.984272+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:40.985413+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:41.986213+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:42.986755+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:43.987031+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:44.987190+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:45.988359+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:46.989413+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:47.990148+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:48.991034+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:49.991780+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:50.992491+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:51.993195+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:52.993788+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:53.994331+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:54.994873+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 1466368 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:55.995026+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 1466368 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:56.995283+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 1466368 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:57.995421+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 1458176 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:58.995675+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 1458176 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:59.995830+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 1458176 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:00.995976+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 1458176 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:01.996110+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 1458176 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:02.996269+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 1458176 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:03.996508+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 1458176 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:04.996662+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:05.996964+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:06.997235+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:07.997479+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:08.997603+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:09.997772+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:10.997958+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:11.998116+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:12.998519+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:13.998739+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:14.999386+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:15.999780+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:17.000299+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:18.000791+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:19.001009+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:20.001176+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:21.001343+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:22.001475+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:23.001969+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:24.002164+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:25.002579+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:26.002806+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:27.002937+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:28.003233+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:29.003344+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:30.003495+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:31.003704+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:32.003837+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:33.003980+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:34.004098+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:35.004331+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:36.004544+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:37.004723+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:38.004863+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:39.005049+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:40.005190+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:41.005335+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:42.005681+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:43.005832+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:44.005984+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:45.006363+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:46.007112+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:47.007276+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:48.007548+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:49.007713+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:50.007847+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:51.008003+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:52.008247+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:53.008501+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:54.008726+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:55.008993+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:56.009131+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:57.009401+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:58.009627+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:59.009802+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:00.009956+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:01.010159+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:02.010435+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:03.010696+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:04.010982+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:05.011248+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:06.011433+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:07.011590+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:08.011720+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:09.011933+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:10.012061+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:11.012172+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:12.012299+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:13.012428+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:14.012595+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:15.012798+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:16.012960+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:17.013121+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:18.013267+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:19.013470+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:20.013644+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:21.013865+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:22.014026+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:23.014201+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:24.014363+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:25.014498+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:26.014634+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:27.014810+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:28.014953+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:29.015137+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:30.015311+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:31.015429+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:32.015537+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:33.015637+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:34.015806+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:35.015977+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:36.016138+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:37.016291+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:38.016441+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:39.016608+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:40.016861+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:41.017076+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:42.017228+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:43.017407+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:44.017527+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:45.017676+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:46.017859+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:47.018029+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:48.018159+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:49.018334+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:50.018483+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:51.018657+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:52.018799+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:53.018975+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:54.019127+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:55.019305+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:56.019486+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:57.019727+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:58.019936+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:59.020153+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:00.020309+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:01.020490+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:02.020618+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:03.020779+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:04.020922+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:05.021075+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:06.021216+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:07.021379+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:08.021547+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:09.021729+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:10.021868+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:11.022003+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:12.022158+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:13.022380+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:14.022511+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:15.022752+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:16.022928+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:17.023047+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:18.023187+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:19.023326+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:20.023483+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:21.024415+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:22.026589+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:23.026870+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:24.027275+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:25.028584+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:26.029200+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:27.029350+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:28.029765+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:29.030067+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:30.030283+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:31.030424+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:32.030596+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:33.030751+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:34.031187+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:35.031394+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:36.031621+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:37.031768+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:38.032118+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:39.032322+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:40.032648+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:41.032965+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:42.033253+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:43.033495+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:44.033742+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:45.033967+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:46.034174+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:47.034374+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:48.034590+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:49.034758+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:50.034928+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:51.035088+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:52.035243+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:53.035418+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:54.035547+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:55.035730+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:56.035847+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:57.036016+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:58.036187+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:59.036357+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:00.036497+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:01.036615+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:02.036777+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:03.036964+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:04.037158+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:05.037338+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:06.037479+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:07.037598+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:08.037806+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:09.037985+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:10.038191+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:11.038405+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:12.038579+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:13.038740+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:14.039082+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:15.039234+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:16.039390+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:17.039659+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:18.039854+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:19.039989+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:20.040135+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:21.040308+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:22.040425+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:23.040553+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:24.040668+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:25.040845+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:26.041021+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:27.041350+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:28.041489+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:29.042956+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:30.044101+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:31.044983+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:32.045954+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:33.046210+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:34.046379+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:35.046660+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:36.047193+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:37.047319+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:38.047498+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:39.047748+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:40.048000+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:41.048125+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:42.048461+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:43.048782+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43e19800
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:44.048993+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 373.959564209s of 374.288970947s, submitted: 90
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 120 handle_osd_map epochs [121,121], i have 121, src has [1,121]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1351680 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:45.049284+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 1277952 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:46.049534+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 860549 data_alloc: 218103808 data_used: 184320
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca47000/0x0/0x4ffc00000, data 0x11e601/0x1d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 1277952 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:47.049703+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 1277952 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 122 handle_osd_map epochs [122,123], i have 122, src has [1,123]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 122 handle_osd_map epochs [123,123], i have 123, src has [1,123]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 123 ms_handle_reset con 0x560b43e19800 session 0x560b44f00d20
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:48.049958+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1261568 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42033000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:49.050084+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x12019a/0x1d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 17907712 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 124 ms_handle_reset con 0x560b42033000 session 0x560b44f1d0e0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:50.050276+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:51.050491+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978265 data_alloc: 218103808 data_used: 188416
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:52.050703+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fba3f000/0x0/0x4ffc00000, data 0x1121d56/0x11dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:53.051001+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fba3f000/0x0/0x4ffc00000, data 0x1121d56/0x11dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:54.051146+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:55.051347+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:56.051535+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980391 data_alloc: 218103808 data_used: 188416
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:57.051660+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x11237b9/0x11e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:58.051821+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:59.051985+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:00.052198+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:01.052435+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980391 data_alloc: 218103808 data_used: 188416
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x11237b9/0x11e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:02.052640+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x11237b9/0x11e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:03.052851+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x11237b9/0x11e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:04.053016+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:05.053232+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:06.053377+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980391 data_alloc: 218103808 data_used: 188416
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:07.053587+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x11237b9/0x11e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:08.053774+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:09.053979+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x11237b9/0x11e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:10.054119+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:11.054274+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980391 data_alloc: 218103808 data_used: 188416
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x11237b9/0x11e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:12.054467+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:13.054601+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:14.054761+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x11237b9/0x11e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:15.054969+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:16.055165+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980391 data_alloc: 218103808 data_used: 188416
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:17.055338+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x11237b9/0x11e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:18.055577+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x11237b9/0x11e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:19.055766+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:20.055892+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x11237b9/0x11e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:21.056052+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42033400
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.974910736s of 36.823040009s, submitted: 49
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984850 data_alloc: 218103808 data_used: 188416
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 17891328 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:22.056199+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 125 handle_osd_map epochs [125,126], i have 125, src has [1,126]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 126 ms_handle_reset con 0x560b42033400 session 0x560b44f1dc20
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fba3c000/0x0/0x4ffc00000, data 0x1123fc9/0x11e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 16834560 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fba37000/0x0/0x4ffc00000, data 0x1125b69/0x11e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:23.056345+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b41ff1c00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 16818176 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:24.056524+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 127 ms_handle_reset con 0x560b41ff1c00 session 0x560b44f30f00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 15769600 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:25.056737+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fba37000/0x0/0x4ffc00000, data 0x1126f07/0x11e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 15769600 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:26.056881+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42033000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994861 data_alloc: 218103808 data_used: 196608
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 15753216 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:27.057102+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 128 ms_handle_reset con 0x560b42033000 session 0x560b44f00d20
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 15728640 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:28.057294+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 15728640 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:29.057441+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42d4dc00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 15720448 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:30.057561+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43e19800
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 87212032 unmapped: 15589376 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 128 heartbeat osd_stat(store_statfs(0x4faa31000/0x0/0x4ffc00000, data 0x2129478/0x21ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:31.057785+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.644784927s of 10.040460587s, submitted: 85
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 129 ms_handle_reset con 0x560b43e19800 session 0x560b44e890e0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223427 data_alloc: 218103808 data_used: 208896
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43cae000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 23822336 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:32.057977+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43cb7000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 130 ms_handle_reset con 0x560b42d4dc00 session 0x560b4295e3c0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 130 ms_handle_reset con 0x560b43cae000 session 0x560b44e892c0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b41ff1c00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 23724032 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 131 ms_handle_reset con 0x560b43cb7000 session 0x560b44e890e0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 131 ms_handle_reset con 0x560b41ff1c00 session 0x560b439f4780
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:33.058142+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42033000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42d4dc00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43e19800
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 131 ms_handle_reset con 0x560b43e19800 session 0x560b44f1dc20
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43e2b000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 131 ms_handle_reset con 0x560b43e2b000 session 0x560b41f812c0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 22773760 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:34.058257+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 132 ms_handle_reset con 0x560b42d4dc00 session 0x560b44d8c000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42d4dc00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 132 ms_handle_reset con 0x560b42033000 session 0x560b4221d4a0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b41ff1c00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 22708224 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:35.058502+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43cb7000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb21d000/0x0/0x4ffc00000, data 0x1131362/0x11fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 133 ms_handle_reset con 0x560b41ff1c00 session 0x560b439fc3c0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 133 ms_handle_reset con 0x560b42d4dc00 session 0x560b44d8d680
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 133 ms_handle_reset con 0x560b43cb7000 session 0x560b44f1c960
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43e19800
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 133 ms_handle_reset con 0x560b43e19800 session 0x560b44569c20
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b41ff1c00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42033000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 133 ms_handle_reset con 0x560b42033000 session 0x560b4295e3c0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 22740992 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:36.058645+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 134 ms_handle_reset con 0x560b41ff1c00 session 0x560b41f8e3c0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43cb7000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041733 data_alloc: 218103808 data_used: 241664
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 22732800 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:37.059161+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 135 ms_handle_reset con 0x560b43cb7000 session 0x560b420e7c20
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 22700032 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43e19800
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:38.059400+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 136 ms_handle_reset con 0x560b43e19800 session 0x560b4221cb40
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fba12000/0x0/0x4ffc00000, data 0x1137add/0x1208000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 22618112 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:39.059633+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43e2b000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43ae1c00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 22609920 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:40.059816+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 137 ms_handle_reset con 0x560b43e2b000 session 0x560b44ce52c0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 137 ms_handle_reset con 0x560b43ae1c00 session 0x560b44568960
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 80224256 unmapped: 22577152 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:41.060012+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b41ff1c00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.287572861s of 10.136335373s, submitted: 244
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051198 data_alloc: 218103808 data_used: 249856
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 80224256 unmapped: 22577152 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:42.060126+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 138 ms_handle_reset con 0x560b41ff1c00 session 0x560b44d525a0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fba0f000/0x0/0x4ffc00000, data 0x113b2dd/0x120d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 22519808 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42033000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:43.060267+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 21413888 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:44.060381+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 140 ms_handle_reset con 0x560b42033000 session 0x560b446f10e0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43cb7000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 21372928 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fba08000/0x0/0x4ffc00000, data 0x113e1d0/0x1212000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:45.060549+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 141 ms_handle_reset con 0x560b43cb7000 session 0x560b44d8c3c0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43e19800
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 141 ms_handle_reset con 0x560b43e19800 session 0x560b4214bc20
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b41ff1c00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81510400 unmapped: 21291008 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42033000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43ae1c00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:46.060701+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 142 ms_handle_reset con 0x560b43ae1c00 session 0x560b4463ef00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063961 data_alloc: 218103808 data_used: 266240
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43cb7000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 21241856 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:47.061176+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fba04000/0x0/0x4ffc00000, data 0x1141b0a/0x1217000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 143 ms_handle_reset con 0x560b42033000 session 0x560b44d8c000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 143 ms_handle_reset con 0x560b41ff1c00 session 0x560b44f31c20
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43e21800
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 143 ms_handle_reset con 0x560b43e21800 session 0x560b44f314a0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 143 ms_handle_reset con 0x560b43cb7000 session 0x560b4463f4a0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 21209088 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:48.061395+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 21159936 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:49.061559+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 21159936 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:50.061760+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 21159936 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:51.062007+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071603 data_alloc: 218103808 data_used: 274432
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 21143552 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:52.062263+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 21143552 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:53.062396+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb5f0000/0x0/0x4ffc00000, data 0x1144a42/0x121d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.284852028s of 11.915773392s, submitted: 186
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 21143552 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:54.062534+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 21143552 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:55.062787+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b41ff1c00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 145 ms_handle_reset con 0x560b41ff1c00 session 0x560b44d8c5a0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42033000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 145 ms_handle_reset con 0x560b42033000 session 0x560b44ce4960
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 21143552 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:56.063016+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43ae1c00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 145 ms_handle_reset con 0x560b43ae1c00 session 0x560b447c4780
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43e21800
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074577 data_alloc: 218103808 data_used: 274432
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fb5ed000/0x0/0x4ffc00000, data 0x114653d/0x1220000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 21135360 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:57.063230+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fb5ed000/0x0/0x4ffc00000, data 0x114653d/0x1220000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81690624 unmapped: 21110784 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:58.063442+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 147 ms_handle_reset con 0x560b43e21800 session 0x560b421bcf00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 21094400 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:59.063577+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b428f6000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43cafc00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81715200 unmapped: 21086208 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:00.063764+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb5e6000/0x0/0x4ffc00000, data 0x1149c8b/0x1226000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 21069824 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:01.064008+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42e88400
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082986 data_alloc: 218103808 data_used: 278528
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 21069824 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:02.064164+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42e88800
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 148 ms_handle_reset con 0x560b42e88800 session 0x560b44f1d0e0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 148 ms_handle_reset con 0x560b42e88400 session 0x560b4463fa40
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 21053440 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:03.064397+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b41ff1c00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.870351791s of 10.022338867s, submitted: 58
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 149 ms_handle_reset con 0x560b41ff1c00 session 0x560b4463fe00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42033000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 21028864 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 149 ms_handle_reset con 0x560b42033000 session 0x560b44df10e0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:04.064521+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43ae1c00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 149 ms_handle_reset con 0x560b43ae1c00 session 0x560b44ef4000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fb5e0000/0x0/0x4ffc00000, data 0x114d3f7/0x122d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 149 handle_osd_map epochs [150,150], i have 150, src has [1,150]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 21061632 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:05.064693+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43e21800
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb5e0000/0x0/0x4ffc00000, data 0x114d3f7/0x122d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 21045248 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:06.064821+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 151 ms_handle_reset con 0x560b43e21800 session 0x560b44ef5860
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095676 data_alloc: 218103808 data_used: 286720
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b41ff1c00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 19996672 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5dd000/0x0/0x4ffc00000, data 0x1150b87/0x1231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:07.065359+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 152 ms_handle_reset con 0x560b41ff1c00 session 0x560b44f1ed20
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 19980288 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:08.066126+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fb5d8000/0x0/0x4ffc00000, data 0x1152b3c/0x1235000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 19980288 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:09.066580+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 19963904 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:10.066724+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 19963904 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fb5d8000/0x0/0x4ffc00000, data 0x1152b3c/0x1235000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:11.067015+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103151 data_alloc: 218103808 data_used: 294912
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 19963904 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:12.067166+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42033000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 152 ms_handle_reset con 0x560b42033000 session 0x560b44df0b40
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42e88400
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 152 ms_handle_reset con 0x560b42e88400 session 0x560b4295e3c0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43ae1c00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 152 ms_handle_reset con 0x560b43ae1c00 session 0x560b44d8cd20
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b406a1c00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 152 ms_handle_reset con 0x560b406a1c00 session 0x560b44d8cf00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b41ff1c00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42033000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 153 ms_handle_reset con 0x560b42033000 session 0x560b41f834a0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 153 ms_handle_reset con 0x560b41ff1c00 session 0x560b44f31e00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42e88400
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 19939328 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 153 ms_handle_reset con 0x560b42e88400 session 0x560b44df10e0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43ae1c00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:13.067298+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 153 ms_handle_reset con 0x560b43ae1c00 session 0x560b44ef5860
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42d4dc00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 153 ms_handle_reset con 0x560b42d4dc00 session 0x560b4545c000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b41ff1c00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 153 ms_handle_reset con 0x560b41ff1c00 session 0x560b4545c780
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fb5d8000/0x0/0x4ffc00000, data 0x1152b3c/0x1235000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 20160512 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:14.067548+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42033000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.832967758s of 11.620203972s, submitted: 124
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 19972096 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11545d7/0x1238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:15.067715+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 19972096 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:16.067966+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107738 data_alloc: 218103808 data_used: 299008
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 19972096 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:17.068141+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42e88400
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 153 ms_handle_reset con 0x560b42e88400 session 0x560b44f1c3c0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43ae1c00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 154 ms_handle_reset con 0x560b43ae1c00 session 0x560b447c4f00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43cba800
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43b03400
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 19963904 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:18.068252+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 154 ms_handle_reset con 0x560b43cba800 session 0x560b44755e00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 154 ms_handle_reset con 0x560b43b03400 session 0x560b421bcf00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43b03400
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 154 ms_handle_reset con 0x560b43b03400 session 0x560b44f30f00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b41ff1c00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fb5d6000/0x0/0x4ffc00000, data 0x11545d7/0x1238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 155 ms_handle_reset con 0x560b41ff1c00 session 0x560b44df03c0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43ae1400
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 19931136 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:19.068370+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 156 ms_handle_reset con 0x560b43ae1400 session 0x560b42803c20
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 19898368 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:20.068480+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43ae1c00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 156 ms_handle_reset con 0x560b43ae1c00 session 0x560b44d8de00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43cba800
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 156 ms_handle_reset con 0x560b43cba800 session 0x560b44d8c780
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 19906560 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b41ff1c00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:21.068576+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 156 ms_handle_reset con 0x560b41ff1c00 session 0x560b42cebe00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb5ca000/0x0/0x4ffc00000, data 0x11598ce/0x1242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120481 data_alloc: 218103808 data_used: 315392
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 19906560 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:22.068725+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43ae1400
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 156 ms_handle_reset con 0x560b43ae1400 session 0x560b44ef4960
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 19881984 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:23.068852+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43ae1c00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb5cc000/0x0/0x4ffc00000, data 0x11598ce/0x1242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [0,0,0,0,1])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 18833408 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:24.068993+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 157 ms_handle_reset con 0x560b43ae1c00 session 0x560b44f1c000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:25.069188+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82935808 unmapped: 19865600 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.018726349s of 10.783482552s, submitted: 110
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 157 ms_handle_reset con 0x560b42033000 session 0x560b4545d2c0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43b03400
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:26.069299+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 19849216 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125112 data_alloc: 218103808 data_used: 323584
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:27.069417+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82960384 unmapped: 19841024 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 158 ms_handle_reset con 0x560b43b03400 session 0x560b4545de00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 158 handle_osd_map epochs [158,159], i have 158, src has [1,159]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:28.069550+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82968576 unmapped: 19832832 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 159 heartbeat osd_stat(store_statfs(0x4fb5c7000/0x0/0x4ffc00000, data 0x115d098/0x1247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:29.069685+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82984960 unmapped: 19816448 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:30.069968+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82984960 unmapped: 19816448 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 159 ms_handle_reset con 0x560b428f6000 session 0x560b44d8d2c0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 159 ms_handle_reset con 0x560b43cafc00 session 0x560b4221d4a0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:31.070121+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82984960 unmapped: 19816448 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b41ff1c00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 159 ms_handle_reset con 0x560b41ff1c00 session 0x560b41f82d20
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128896 data_alloc: 218103808 data_used: 335872
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 159 heartbeat osd_stat(store_statfs(0x4fb5c4000/0x0/0x4ffc00000, data 0x115eb17/0x124a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:32.070341+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 19791872 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42033000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 159 ms_handle_reset con 0x560b42033000 session 0x560b445cc960
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43ae1400
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 159 ms_handle_reset con 0x560b43ae1400 session 0x560b4281fc20
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:33.070487+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 19791872 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:34.070677+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 19775488 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5c2000/0x0/0x4ffc00000, data 0x11606ed/0x124b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:35.070976+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 19775488 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5c2000/0x0/0x4ffc00000, data 0x11606ed/0x124b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:36.071192+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 19775488 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1131316 data_alloc: 218103808 data_used: 335872
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:37.071366+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 19775488 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 160 handle_osd_map epochs [160,161], i have 160, src has [1,161]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.780132294s of 12.234910965s, submitted: 104
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:38.071536+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:39.071724+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:40.071865+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:41.072066+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:42.072172+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:43.072325+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:44.072495+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:45.072679+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:46.072863+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:47.072985+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:48.073171+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:49.073330+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:50.073491+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:51.073617+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:52.073775+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:53.073940+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:54.074097+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:55.074306+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:56.074459+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:57.074625+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:58.074783+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:59.074963+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:00.075187+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:01.075353+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:02.075541+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:03.075697+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:04.075876+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:05.076083+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:06.076216+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:07.076363+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:08.076500+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:09.076655+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:10.076884+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1801.0 total, 600.0 interval
                                           Cumulative writes: 8591 writes, 32K keys, 8591 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8591 writes, 2012 syncs, 4.27 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1906 writes, 4972 keys, 1906 commit groups, 1.0 writes per commit group, ingest: 2.41 MB, 0.00 MB/s
                                           Interval WAL: 1906 writes, 803 syncs, 2.37 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:11.077023+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:12.077165+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:13.077300+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:14.077462+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:15.077659+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:16.077953+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:17.078063+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:18.078202+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:19.078317+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: mgrc ms_handle_reset ms_handle_reset con 0x560b41415c00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/536471675
Nov 24 18:51:54 compute-0 ceph-osd[89581]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/536471675,v1:192.168.122.100:6801/536471675]
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: get_auth_request con 0x560b43ae1400 auth_method 0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: mgrc handle_mgr_configure stats_period=5
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 ms_handle_reset con 0x560b41ff0800 session 0x560b413a9680
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b452a2000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:20.078496+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 ms_handle_reset con 0x560b41ff1800 session 0x560b44d52f00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b452a2400
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 ms_handle_reset con 0x560b42d4cc00 session 0x560b4278c000
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b452a2800
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:21.078796+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:22.078953+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:23.079094+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:24.079215+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:25.079367+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:26.079564+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:27.079721+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:28.079964+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:29.080165+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:30.080363+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:31.080508+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:32.080664+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:33.080846+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:34.081057+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:35.081332+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:36.081523+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:37.081663+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:38.082129+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:39.082313+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:40.082502+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:41.082645+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:42.082831+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:43.082955+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:44.083147+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:45.083369+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:46.083513+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:47.083655+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:48.083794+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:49.084076+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:50.084261+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:51.084393+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:52.084557+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:53.084676+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:54.084800+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:55.085213+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:56.085386+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:57.085529+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:58.085681+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:59.085855+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:00.086011+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:01.086198+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:02.086333+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:03.086465+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:04.086649+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:05.086848+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:06.086994+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:07.087125+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:08.087271+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:09.087406+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:10.087569+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:11.087689+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:12.087856+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:13.088038+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:14.088150+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:15.088304+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:16.088914+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:17.089039+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:18.089182+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:19.089363+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:20.089536+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:21.089722+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 19456000 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: do_command 'config diff' '{prefix=config diff}'
Nov 24 18:51:54 compute-0 ceph-osd[89581]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 24 18:51:54 compute-0 ceph-osd[89581]: do_command 'config show' '{prefix=config show}'
Nov 24 18:51:54 compute-0 ceph-osd[89581]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 24 18:51:54 compute-0 ceph-osd[89581]: do_command 'counter dump' '{prefix=counter dump}'
Nov 24 18:51:54 compute-0 ceph-osd[89581]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 24 18:51:54 compute-0 ceph-osd[89581]: do_command 'counter schema' '{prefix=counter schema}'
Nov 24 18:51:54 compute-0 ceph-osd[89581]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:22.089954+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:54 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:54 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:51:54 compute-0 ceph-osd[89581]: osd.1 161 ms_handle_reset con 0x560b42d4c400 session 0x560b43a1b860
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b452a2c00
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 18923520 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:23.090119+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 18767872 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:51:54 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:24.090596+0000)
Nov 24 18:51:54 compute-0 ceph-osd[89581]: do_command 'log dump' '{prefix=log dump}'
Nov 24 18:51:54 compute-0 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 18:51:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 24 18:51:54 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2050233863' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 24 18:51:54 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3021928101' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 24 18:51:54 compute-0 ceph-mon[74927]: pgmap v1121: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:54 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1007814410' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 24 18:51:54 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/647054694' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 24 18:51:54 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/533489741' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 24 18:51:54 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2561857792' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 24 18:51:54 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2050233863' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 24 18:51:54 compute-0 rsyslogd[1008]: imjournal from <np0005533938:ceph-osd>: begin to drop messages due to rate-limiting
Nov 24 18:51:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 24 18:51:54 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1151842669' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 18:51:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 24 18:51:54 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1066158008' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 24 18:51:55 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 24 18:51:55 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1524898734' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 24 18:51:55 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14861 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:51:55 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14863 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:55 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14865 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:51:55 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1122: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:56 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14869 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:56 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14868 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:51:56 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1151842669' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 18:51:56 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1066158008' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 24 18:51:56 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1524898734' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 24 18:51:56 compute-0 ceph-mon[74927]: from='client.14861 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:51:56 compute-0 ceph-mon[74927]: from='client.14863 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:56 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14873 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:51:57 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14875 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:51:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 24 18:51:57 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3755740180' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 24 18:51:57 compute-0 ceph-mon[74927]: from='client.14865 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:51:57 compute-0 ceph-mon[74927]: pgmap v1122: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:57 compute-0 ceph-mon[74927]: from='client.14869 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:57 compute-0 ceph-mon[74927]: from='client.14868 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:51:57 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3755740180' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 24 18:51:57 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14879 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:51:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 24 18:51:57 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3392560238' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 24 18:51:57 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1123: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:57 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14883 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:51:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:51:58 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 24 18:51:58 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3291545781' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76 pruub=14.997041702s) [2] async=[2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 55'385 active pruub 173.554534912s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76 pruub=14.997647285s) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 173.554550171s@ mbc={}] exit Reset 0.000698 1 0.000126
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76 pruub=14.997647285s) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 173.554550171s@ mbc={}] enter Started
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76 pruub=14.997647285s) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 173.554550171s@ mbc={}] enter Start
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76 pruub=14.997647285s) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 173.554550171s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76 pruub=14.997647285s) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 173.554550171s@ mbc={}] exit Start 0.000014 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76 pruub=14.997647285s) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 173.554550171s@ mbc={}] enter Started/Stray
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76 pruub=14.996972084s) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 173.554534912s@ mbc={}] exit Reset 0.000107 1 0.000158
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76 pruub=14.996972084s) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 173.554534912s@ mbc={}] enter Started
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76 pruub=14.996972084s) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 173.554534912s@ mbc={}] enter Start
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76 pruub=14.996972084s) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 173.554534912s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76 pruub=14.996972084s) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 173.554534912s@ mbc={}] exit Start 0.000012 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/67 les/c/f=75/68/0 sis=74) [2]/[0] async=[2] r=0 lpr=74 pi=[67,74)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.960864 1 0.000154
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76 pruub=14.996972084s) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 173.554534912s@ mbc={}] enter Started/Stray
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/67 les/c/f=75/68/0 sis=74) [2]/[0] async=[2] r=0 lpr=74 pi=[67,74)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active 1.013294 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/67 les/c/f=75/68/0 sis=74) [2]/[0] async=[2] r=0 lpr=74 pi=[67,74)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary 2.032266 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/67 les/c/f=75/68/0 sis=74) [2]/[0] async=[2] r=0 lpr=74 pi=[67,74)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started 2.032330 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/67 les/c/f=75/68/0 sis=74) [2]/[0] async=[2] r=0 lpr=74 pi=[67,74)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] enter Reset
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/67 les/c/f=75/68/0 sis=76 pruub=14.996159554s) [2] async=[2] r=-1 lpr=76 pi=[67,76)/1 crt=55'385 mlcod 55'385 active pruub 173.554031372s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/67 les/c/f=75/68/0 sis=76 pruub=14.996058464s) [2] r=-1 lpr=76 pi=[67,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 173.554031372s@ mbc={}] exit Reset 0.000154 1 0.000390
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/67 les/c/f=75/68/0 sis=76 pruub=14.996058464s) [2] r=-1 lpr=76 pi=[67,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 173.554031372s@ mbc={}] enter Started
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/67 les/c/f=75/68/0 sis=76 pruub=14.996058464s) [2] r=-1 lpr=76 pi=[67,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 173.554031372s@ mbc={}] enter Start
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/67 les/c/f=75/68/0 sis=76 pruub=14.996058464s) [2] r=-1 lpr=76 pi=[67,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 173.554031372s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/67 les/c/f=75/68/0 sis=76 pruub=14.996058464s) [2] r=-1 lpr=76 pi=[67,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 173.554031372s@ mbc={}] exit Start 0.000010 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/67 les/c/f=75/68/0 sis=76 pruub=14.996058464s) [2] r=-1 lpr=76 pi=[67,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 173.554031372s@ mbc={}] enter Started/Stray
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 950272 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:42.992444+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 950272 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.2 deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.266732 6 0.000591
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/67 les/c/f=75/68/0 sis=76) [2] r=-1 lpr=76 pi=[67,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.266122 6 0.000313
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/67 les/c/f=75/68/0 sis=76) [2] r=-1 lpr=76 pi=[67,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/67 les/c/f=75/68/0 sis=76) [2] r=-1 lpr=76 pi=[67,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.266940 6 0.000547
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.266829 6 0.001723
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000569 1 0.000058
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: not registered w/ OSD
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000965 2 0.000027
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: not registered w/ OSD
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/67 les/c/f=75/68/0 sis=76) [2] r=-1 lpr=76 pi=[67,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001089 2 0.000072
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/67 les/c/f=75/68/0 sis=76) [2] r=-1 lpr=76 pi=[67,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: not registered w/ OSD
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001099 2 0.000063
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: not registered w/ OSD
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 77 heartbeat osd_stat(store_statfs(0x4fe0e7000/0x0/0x4ffc00000, data 0x61006/0xe5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a2f9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.2 deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.17( v 55'385 (0'0,55'385] lb MIN local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=-1 lpr=76 DELETING pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.059640 3 0.000263
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.17( v 55'385 (0'0,55'385] lb MIN local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.060346 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.17( v 55'385 (0'0,55'385] lb MIN local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.327136 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.17] failed. State was: not registered w/ OSD
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.f( v 55'385 (0'0,55'385] lb MIN local-lis/les=74/75 n=6 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=-1 lpr=76 DELETING pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.111192 2 0.000182
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.f( v 55'385 (0'0,55'385] lb MIN local-lis/les=74/75 n=6 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.112207 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.f( v 55'385 (0'0,55'385] lb MIN local-lis/les=74/75 n=6 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.379181 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: not registered w/ OSD
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.7( v 55'385 (0'0,55'385] lb MIN local-lis/les=74/75 n=6 ec=59/49 lis/c=74/67 les/c/f=75/68/0 sis=76) [2] r=-1 lpr=76 DELETING pi=[67,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.148100 2 0.000121
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.7( v 55'385 (0'0,55'385] lb MIN local-lis/les=74/75 n=6 ec=59/49 lis/c=74/67 les/c/f=75/68/0 sis=76) [2] r=-1 lpr=76 pi=[67,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.149235 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.7( v 55'385 (0'0,55'385] lb MIN local-lis/les=74/75 n=6 ec=59/49 lis/c=74/67 les/c/f=75/68/0 sis=76) [2] r=-1 lpr=76 pi=[67,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.415424 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.7] failed. State was: not registered w/ OSD
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.1f( v 55'385 (0'0,55'385] lb MIN local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=-1 lpr=76 DELETING pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.185029 2 0.000101
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.1f( v 55'385 (0'0,55'385] lb MIN local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.186176 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 77 pg[9.1f( v 55'385 (0'0,55'385] lb MIN local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.453063 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: not registered w/ OSD
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:43.992577+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:13.763584+0000 osd.0 (osd.0) 74 : cluster [DBG] 5.2 deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:13.777935+0000 osd.0 (osd.0) 75 : cluster [DBG] 5.2 deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 77 heartbeat osd_stat(store_statfs(0x4fe0e7000/0x0/0x4ffc00000, data 0x61006/0xe5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a2f9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 77 handle_osd_map epochs [77,78], i have 77, src has [1,78]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 75) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:13.763584+0000 osd.0 (osd.0) 74 : cluster [DBG] 5.2 deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:13.777935+0000 osd.0 (osd.0) 75 : cluster [DBG] 5.2 deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 654989 data_alloc: 218103808 data_used: 57344
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 1941504 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:44.992754+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 1949696 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:45.992884+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:15.764842+0000 osd.0 (osd.0) 76 : cluster [DBG] 5.3 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:15.778952+0000 osd.0 (osd.0) 77 : cluster [DBG] 5.3 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 77) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:15.764842+0000 osd.0 (osd.0) 76 : cluster [DBG] 5.3 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:15.778952+0000 osd.0 (osd.0) 77 : cluster [DBG] 5.3 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 1941504 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:46.993083+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.b scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.b scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63094784 unmapped: 1859584 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:47.993242+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:17.690240+0000 osd.0 (osd.0) 78 : cluster [DBG] 2.b scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:17.704336+0000 osd.0 (osd.0) 79 : cluster [DBG] 2.b scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 79) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:17.690240+0000 osd.0 (osd.0) 78 : cluster [DBG] 2.b scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:17.704336+0000 osd.0 (osd.0) 79 : cluster [DBG] 2.b scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 1851392 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 80 heartbeat osd_stat(store_statfs(0x4fe0e5000/0x0/0x4ffc00000, data 0x65e2f/0xe9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a2f9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:48.993411+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 663035 data_alloc: 218103808 data_used: 69632
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 1843200 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:49.993540+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:19.688752+0000 osd.0 (osd.0) 80 : cluster [DBG] 2.8 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:19.702865+0000 osd.0 (osd.0) 81 : cluster [DBG] 2.8 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 80 handle_osd_map epochs [80,81], i have 80, src has [1,81]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 81) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:19.688752+0000 osd.0 (osd.0) 80 : cluster [DBG] 2.8 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:19.702865+0000 osd.0 (osd.0) 81 : cluster [DBG] 2.8 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 1826816 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:50.993766+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.16 deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.16 deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 1826816 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:51.994015+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:21.691732+0000 osd.0 (osd.0) 82 : cluster [DBG] 2.16 deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:21.705919+0000 osd.0 (osd.0) 83 : cluster [DBG] 2.16 deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 81 handle_osd_map epochs [81,82], i have 81, src has [1,82]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.061457634s of 10.204257011s, submitted: 36
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 83) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:21.691732+0000 osd.0 (osd.0) 82 : cluster [DBG] 2.16 deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:21.705919+0000 osd.0 (osd.0) 83 : cluster [DBG] 2.16 deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 1818624 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:52.994265+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 1769472 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:53.994409+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:23.678579+0000 osd.0 (osd.0) 84 : cluster [DBG] 5.15 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:23.692627+0000 osd.0 (osd.0) 85 : cluster [DBG] 5.15 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 671702 data_alloc: 218103808 data_used: 73728
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.14 deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.14 deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 1753088 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 82 handle_osd_map epochs [82,83], i have 82, src has [1,83]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 85) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:23.678579+0000 osd.0 (osd.0) 84 : cluster [DBG] 5.15 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:23.692627+0000 osd.0 (osd.0) 85 : cluster [DBG] 5.15 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 83 heartbeat osd_stat(store_statfs(0x4fe0dc000/0x0/0x4ffc00000, data 0x6b0a6/0xf2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a2f9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:54.994653+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:24.705298+0000 osd.0 (osd.0) 86 : cluster [DBG] 5.14 deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:24.719393+0000 osd.0 (osd.0) 87 : cluster [DBG] 5.14 deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 1728512 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 87) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:24.705298+0000 osd.0 (osd.0) 86 : cluster [DBG] 5.14 deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:24.719393+0000 osd.0 (osd.0) 87 : cluster [DBG] 5.14 deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 83 handle_osd_map epochs [83,84], i have 83, src has [1,84]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:55.994883+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 1679360 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:56.995050+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 84 handle_osd_map epochs [85,86], i have 84, src has [1,86]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 1662976 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:57.995211+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 1654784 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:58.995350+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:28.678889+0000 osd.0 (osd.0) 88 : cluster [DBG] 2.13 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:28.692976+0000 osd.0 (osd.0) 89 : cluster [DBG] 2.13 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 688446 data_alloc: 218103808 data_used: 86016
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 1646592 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:22:59.995545+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 4 last_log 91 sent 89 num 4 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:29.630994+0000 osd.0 (osd.0) 90 : cluster [DBG] 2.11 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:29.645098+0000 osd.0 (osd.0) 91 : cluster [DBG] 2.11 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 89) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:28.678889+0000 osd.0 (osd.0) 88 : cluster [DBG] 2.13 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:28.692976+0000 osd.0 (osd.0) 89 : cluster [DBG] 2.13 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 91) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:29.630994+0000 osd.0 (osd.0) 90 : cluster [DBG] 2.11 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:29.645098+0000 osd.0 (osd.0) 91 : cluster [DBG] 2.11 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 87 heartbeat osd_stat(store_statfs(0x4fe0cc000/0x0/0x4ffc00000, data 0x73701/0x101000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a2f9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 1548288 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:00.995694+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 1548288 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:01.995827+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 1540096 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:02.995946+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.710573196s of 10.931967735s, submitted: 40
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 1523712 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:03.996096+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:33.628843+0000 osd.0 (osd.0) 92 : cluster [DBG] 2.18 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:33.643151+0000 osd.0 (osd.0) 93 : cluster [DBG] 2.18 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 93) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:33.628843+0000 osd.0 (osd.0) 92 : cluster [DBG] 2.18 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:33.643151+0000 osd.0 (osd.0) 93 : cluster [DBG] 2.18 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 692434 data_alloc: 218103808 data_used: 106496
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 1515520 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:04.996274+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:34.672282+0000 osd.0 (osd.0) 94 : cluster [DBG] 3.17 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:34.686380+0000 osd.0 (osd.0) 95 : cluster [DBG] 3.17 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 95) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:34.672282+0000 osd.0 (osd.0) 94 : cluster [DBG] 3.17 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:34.686380+0000 osd.0 (osd.0) 95 : cluster [DBG] 3.17 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 1482752 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:05.996642+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 87 heartbeat osd_stat(store_statfs(0x4fe0cd000/0x0/0x4ffc00000, data 0x73701/0x101000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a2f9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 87 handle_osd_map epochs [88,88], i have 88, src has [1,88]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 1474560 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:06.996760+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 88 handle_osd_map epochs [88,89], i have 88, src has [1,89]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 1474560 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:07.997001+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 1458176 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:08.997310+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 701398 data_alloc: 218103808 data_used: 114688
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 1458176 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:09.997462+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 1449984 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:10.997666+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 1449984 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:11.997843+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.f deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 91 heartbeat osd_stat(store_statfs(0x4fe0bf000/0x0/0x4ffc00000, data 0x7a4f5/0x10d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1a2f9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.f deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 1458176 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:12.997989+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:42.574632+0000 osd.0 (osd.0) 96 : cluster [DBG] 3.f deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:42.588683+0000 osd.0 (osd.0) 97 : cluster [DBG] 3.f deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 97) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:42.574632+0000 osd.0 (osd.0) 96 : cluster [DBG] 3.f deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:42.588683+0000 osd.0 (osd.0) 97 : cluster [DBG] 3.f deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 91 handle_osd_map epochs [91,92], i have 91, src has [1,92]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 92 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=67) [0] r=0 lpr=67 crt=55'385 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 46.919625 74 0.000542
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 92 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=67) [0] r=0 lpr=67 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active 46.928491 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 92 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=67) [0] r=0 lpr=67 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary 47.944112 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 92 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=67) [0] r=0 lpr=67 crt=55'385 mlcod 0'0 active mbc={}] exit Started 47.944224 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 92 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=67) [0] r=0 lpr=67 crt=55'385 mlcod 0'0 active mbc={}] enter Reset
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 92 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92 pruub=9.081287384s) [2] r=-1 lpr=92 pi=[67,92)/1 crt=55'385 mlcod 0'0 active pruub 198.269866943s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 92 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92 pruub=9.081098557s) [2] r=-1 lpr=92 pi=[67,92)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 198.269866943s@ mbc={}] exit Reset 0.000228 1 0.000321
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 92 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92 pruub=9.081098557s) [2] r=-1 lpr=92 pi=[67,92)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 198.269866943s@ mbc={}] enter Started
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 92 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92 pruub=9.081098557s) [2] r=-1 lpr=92 pi=[67,92)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 198.269866943s@ mbc={}] enter Start
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 92 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92 pruub=9.081098557s) [2] r=-1 lpr=92 pi=[67,92)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 198.269866943s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 92 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92 pruub=9.081098557s) [2] r=-1 lpr=92 pi=[67,92)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 198.269866943s@ mbc={}] exit Start 0.000085 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 92 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92 pruub=9.081098557s) [2] r=-1 lpr=92 pi=[67,92)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 198.269866943s@ mbc={}] enter Started/Stray
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 92 handle_osd_map epochs [92,92], i have 92, src has [1,92]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 1449984 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:13.998250+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.453237534s of 10.508935928s, submitted: 16
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 93 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=-1 lpr=92 pi=[67,92)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.014058 3 0.000195
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 93 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=-1 lpr=92 pi=[67,92)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.014203 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 93 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=-1 lpr=92 pi=[67,92)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 93 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 93 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 remapped mbc={}] exit Reset 0.000083 1 0.000117
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 93 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 93 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Start
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 93 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 93 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 remapped mbc={}] exit Start 0.000007 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 93 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 93 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 93 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 93 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000924 2 0.000044
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 93 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 93 handle_osd_map epochs [93,93], i have 93, src has [1,93]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 93 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] async=[2] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000029 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 93 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] async=[2] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 93 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] async=[2] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 93 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] async=[2] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 713761 data_alloc: 218103808 data_used: 114688
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 1425408 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:14.998371+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 93 handle_osd_map epochs [93,94], i have 93, src has [1,94]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 93 handle_osd_map epochs [94,94], i have 94, src has [1,94]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 94 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] async=[2] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005530 3 0.000111
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 94 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] async=[2] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.006626 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 94 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] async=[2] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 94 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] async=[2] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 94 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] async=[2] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 94 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] async=[2] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.007609 5 0.001106
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 94 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] async=[2] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 94 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] async=[2] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000204 1 0.000058
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 94 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] async=[2] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 94 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] async=[2] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000488 1 0.000033
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 94 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] async=[2] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 94 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] async=[2] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.039240 2 0.000064
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 94 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] async=[2] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 94 heartbeat osd_stat(store_statfs(0x4fcf1a000/0x0/0x4ffc00000, data 0x7dad7/0x113000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2bcf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 1384448 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:15.998566+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:45.525864+0000 osd.0 (osd.0) 98 : cluster [DBG] 3.c scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:45.539996+0000 osd.0 (osd.0) 99 : cluster [DBG] 3.c scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 99) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:45.525864+0000 osd.0 (osd.0) 98 : cluster [DBG] 3.c scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:45.539996+0000 osd.0 (osd.0) 99 : cluster [DBG] 3.c scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 94 handle_osd_map epochs [95,95], i have 95, src has [1,95]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] async=[2] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.983534 1 0.000116
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] async=[2] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active 1.031347 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] async=[2] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary 2.038019 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] async=[2] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started 2.038051 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] async=[2] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] enter Reset
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95 pruub=14.975742340s) [2] async=[2] r=-1 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 55'385 active pruub 207.216949463s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95 pruub=14.974552155s) [2] r=-1 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 207.216949463s@ mbc={}] exit Reset 0.001256 1 0.001346
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95 pruub=14.974552155s) [2] r=-1 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 207.216949463s@ mbc={}] enter Started
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95 pruub=14.974552155s) [2] r=-1 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 207.216949463s@ mbc={}] enter Start
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95 pruub=14.974552155s) [2] r=-1 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 207.216949463s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95 pruub=14.974552155s) [2] r=-1 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 207.216949463s@ mbc={}] exit Start 0.000314 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95 pruub=14.974552155s) [2] r=-1 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 207.216949463s@ mbc={}] enter Started/Stray
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 1327104 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:16.998827+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:46.549348+0000 osd.0 (osd.0) 100 : cluster [DBG] 7.f scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:46.563447+0000 osd.0 (osd.0) 101 : cluster [DBG] 7.f scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 101) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:46.549348+0000 osd.0 (osd.0) 100 : cluster [DBG] 7.f scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:46.563447+0000 osd.0 (osd.0) 101 : cluster [DBG] 7.f scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 96 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=-1 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.020468 7 0.000640
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 96 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=-1 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 96 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=-1 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 96 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=-1 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000109 1 0.000063
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 96 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=-1 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: not registered w/ OSD
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 96 pg[9.13( v 55'385 (0'0,55'385] lb MIN local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=-1 lpr=95 DELETING pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.040804 2 0.000247
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 96 pg[9.13( v 55'385 (0'0,55'385] lb MIN local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=-1 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.041003 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 96 pg[9.13( v 55'385 (0'0,55'385] lb MIN local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=-1 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.061932 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: not registered w/ OSD
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 1277952 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:17.999081+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 1269760 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 96 heartbeat osd_stat(store_statfs(0x4fcf12000/0x0/0x4ffc00000, data 0x82930/0x11b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2bcf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:18.999220+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:48.476675+0000 osd.0 (osd.0) 102 : cluster [DBG] 7.6 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:48.490705+0000 osd.0 (osd.0) 103 : cluster [DBG] 7.6 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 96 handle_osd_map epochs [96,97], i have 96, src has [1,97]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 103) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:48.476675+0000 osd.0 (osd.0) 102 : cluster [DBG] 7.6 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:48.490705+0000 osd.0 (osd.0) 103 : cluster [DBG] 7.6 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 721998 data_alloc: 218103808 data_used: 118784
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 1245184 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:19.999402+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:49.473564+0000 osd.0 (osd.0) 104 : cluster [DBG] 7.4 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:49.487650+0000 osd.0 (osd.0) 105 : cluster [DBG] 7.4 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 105) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:49.473564+0000 osd.0 (osd.0) 104 : cluster [DBG] 7.4 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:49.487650+0000 osd.0 (osd.0) 105 : cluster [DBG] 7.4 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 1245184 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:20.999570+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 1236992 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:21.999696+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 1236992 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:22.999824+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 97 handle_osd_map epochs [98,99], i have 97, src has [1,99]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 99 pg[9.16(unlocked)] enter Initial
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 99 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99) [0] r=0 lpr=0 pi=[75,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000058 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 99 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99) [0] r=0 lpr=0 pi=[75,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 99 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99) [0] r=0 lpr=99 pi=[75,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000029
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 99 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99) [0] r=0 lpr=99 pi=[75,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 99 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99) [0] r=0 lpr=99 pi=[75,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 99 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99) [0] r=0 lpr=99 pi=[75,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 99 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99) [0] r=0 lpr=99 pi=[75,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 99 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99) [0] r=0 lpr=99 pi=[75,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 99 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99) [0] r=0 lpr=99 pi=[75,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 99 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99) [0] r=0 lpr=99 pi=[75,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 99 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99) [0] r=0 lpr=99 pi=[75,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000202 1 0.000065
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 99 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99) [0] r=0 lpr=99 pi=[75,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 98 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=67) [0] r=0 lpr=67 crt=55'385 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 57.028476 94 0.000484
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 98 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=67) [0] r=0 lpr=67 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active 57.037998 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 98 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=67) [0] r=0 lpr=67 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary 58.054592 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 99 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99) [0] r=0 lpr=99 pi=[75,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000344 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 99 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99) [0] r=0 lpr=99 pi=[75,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000581 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 98 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=67) [0] r=0 lpr=67 crt=55'385 mlcod 0'0 active mbc={}] exit Started 58.054615 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 98 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=67) [0] r=0 lpr=67 crt=55'385 mlcod 0'0 active mbc={}] enter Reset
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 99 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99) [0] r=0 lpr=99 pi=[75,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 98 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98 pruub=14.972748756s) [1] r=-1 lpr=98 pi=[67,98)/1 crt=55'385 mlcod 0'0 active pruub 214.270065308s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 99 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98 pruub=14.972686768s) [1] r=-1 lpr=98 pi=[67,98)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 214.270065308s@ mbc={}] exit Reset 0.000092 2 0.000255
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 99 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98 pruub=14.972686768s) [1] r=-1 lpr=98 pi=[67,98)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 214.270065308s@ mbc={}] enter Started
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 99 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98 pruub=14.972686768s) [1] r=-1 lpr=98 pi=[67,98)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 214.270065308s@ mbc={}] enter Start
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 99 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98 pruub=14.972686768s) [1] r=-1 lpr=98 pi=[67,98)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 214.270065308s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 99 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98 pruub=14.972686768s) [1] r=-1 lpr=98 pi=[67,98)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 214.270065308s@ mbc={}] exit Start 0.000012 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 99 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98 pruub=14.972686768s) [1] r=-1 lpr=98 pi=[67,98)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 214.270065308s@ mbc={}] enter Started/Stray
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 99 handle_osd_map epochs [98,99], i have 99, src has [1,99]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.3 deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.3 deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63741952 unmapped: 1212416 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:23.999969+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:53.507277+0000 osd.0 (osd.0) 106 : cluster [DBG] 3.3 deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:53.521392+0000 osd.0 (osd.0) 107 : cluster [DBG] 3.3 deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 107) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:53.507277+0000 osd.0 (osd.0) 106 : cluster [DBG] 3.3 deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:53.521392+0000 osd.0 (osd.0) 107 : cluster [DBG] 3.3 deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 99 handle_osd_map epochs [99,100], i have 99, src has [1,100]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.989005089s of 10.113665581s, submitted: 45
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 99 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=-1 lpr=98 pi=[67,98)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017960 3 0.000060
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=-1 lpr=98 pi=[67,98)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.018000 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=-1 lpr=98 pi=[67,98)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 remapped mbc={}] exit Reset 0.000063 1 0.000086
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Start
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000029 1 0.000033
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] async=[1] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000022 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] async=[1] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] async=[1] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] async=[1] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99) [0] r=0 lpr=99 pi=[75,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.019602 2 0.000428
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99) [0] r=0 lpr=99 pi=[75,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.020262 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99) [0] r=0 lpr=99 pi=[75,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.020287 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99) [0] r=0 lpr=99 pi=[75,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=-1 lpr=100 pi=[75,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=-1 lpr=100 pi=[75,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000082 1 0.000120
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=-1 lpr=100 pi=[75,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=-1 lpr=100 pi=[75,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=-1 lpr=100 pi=[75,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=-1 lpr=100 pi=[75,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000007 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=-1 lpr=100 pi=[75,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 100 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 735092 data_alloc: 218103808 data_used: 118784
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 1204224 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 100 heartbeat osd_stat(store_statfs(0x4fcf05000/0x0/0x4ffc00000, data 0x89646/0x127000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2bcf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:25.000159+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:54.499340+0000 osd.0 (osd.0) 108 : cluster [DBG] 3.6 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:54.513431+0000 osd.0 (osd.0) 109 : cluster [DBG] 3.6 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 109) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:54.499340+0000 osd.0 (osd.0) 108 : cluster [DBG] 3.6 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:54.513431+0000 osd.0 (osd.0) 109 : cluster [DBG] 3.6 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 100 handle_osd_map epochs [101,101], i have 101, src has [1,101]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 101 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] async=[1] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006896 4 0.000061
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 101 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] async=[1] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.007000 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 101 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] async=[1] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 101 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] async=[1] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Nov 24 18:51:58 compute-0 ceph-osd[88544]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 101 pg[9.16( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=-1 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.008588 6 0.000162
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 101 pg[9.16( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=-1 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 101 pg[9.16( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=-1 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 101 handle_osd_map epochs [101,101], i have 101, src has [1,101]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: not registered w/ OSD
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 101 pg[9.16( v 55'385 lc 55'54 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] r=-1 lpr=100 pi=[75,100)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.012644 3 0.000689
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 101 pg[9.16( v 55'385 lc 55'54 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] r=-1 lpr=100 pi=[75,100)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 101 pg[9.16( v 55'385 lc 55'54 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] r=-1 lpr=100 pi=[75,100)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000065 1 0.000050
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 101 pg[9.16( v 55'385 lc 55'54 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] r=-1 lpr=100 pi=[75,100)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 101 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] async=[1] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 101 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] async=[1] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.027709 5 0.000329
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 101 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] async=[1] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 101 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] async=[1] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000109 1 0.000086
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 101 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] async=[1] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 101 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] async=[1] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000528 1 0.000050
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 101 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] async=[1] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 101 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] r=-1 lpr=100 pi=[75,100)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.060035 1 0.000023
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 101 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] r=-1 lpr=100 pi=[75,100)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 101 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] async=[1] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.054124 2 0.000056
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 101 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] async=[1] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63766528 unmapped: 1187840 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 101 heartbeat osd_stat(store_statfs(0x4fcf00000/0x0/0x4ffc00000, data 0x8b121/0x12b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2bcf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:26.000316+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 101 handle_osd_map epochs [101,102], i have 101, src has [1,102]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 101 handle_osd_map epochs [102,102], i have 102, src has [1,102]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] async=[1] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.924766 1 0.000116
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] async=[1] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active 1.007535 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] async=[1] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary 2.014557 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] async=[1] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started 2.014582 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] async=[1] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] enter Reset
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102 pruub=15.020028114s) [1] async=[1] r=-1 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 55'385 active pruub 217.350143433s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102 pruub=15.019754410s) [1] r=-1 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 217.350143433s@ mbc={}] exit Reset 0.000331 1 0.000398
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102 pruub=15.019754410s) [1] r=-1 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 217.350143433s@ mbc={}] enter Started
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102 pruub=15.019754410s) [1] r=-1 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 217.350143433s@ mbc={}] enter Start
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102 pruub=15.019754410s) [1] r=-1 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 217.350143433s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102 pruub=15.019754410s) [1] r=-1 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 217.350143433s@ mbc={}] exit Start 0.000090 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102 pruub=15.019754410s) [1] r=-1 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 217.350143433s@ mbc={}] enter Started/Stray
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] r=-1 lpr=100 pi=[75,100)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.932631 1 0.000075
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] r=-1 lpr=100 pi=[75,100)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.005484 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] r=-1 lpr=100 pi=[75,100)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started 2.014315 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] r=-1 lpr=100 pi=[75,100)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown mbc={}] exit Reset 0.000155 1 0.000322
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Start
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown mbc={}] exit Start 0.000056 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000907 2 0.000268
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 102 handle_osd_map epochs [102,102], i have 102, src has [1,102]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: merge_log_dups log.dups.size()=0olog.dups.size()=9
Nov 24 18:51:58 compute-0 ceph-osd[88544]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000400 2 0.000173
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000026 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 102 handle_osd_map epochs [102,102], i have 102, src has [1,102]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 1114112 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:27.000471+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 103 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.016285 2 0.000202
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 103 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.018083 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 103 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 103 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=102/103 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 103 handle_osd_map epochs [103,103], i have 103, src has [1,103]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 103 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=102/103 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 103 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=102/103 n=5 ec=59/49 lis/c=102/75 les/c/f=103/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005507 4 0.000527
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 103 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=102/103 n=5 ec=59/49 lis/c=102/75 les/c/f=103/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 103 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=102/103 n=5 ec=59/49 lis/c=102/75 les/c/f=103/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 103 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=102/103 n=5 ec=59/49 lis/c=102/75 les/c/f=103/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 103 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=-1 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.047861 7 0.000323
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 103 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=-1 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 103 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=-1 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 103 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=-1 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000114 1 0.000088
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 103 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=-1 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 103 pg[9.15( v 55'385 (0'0,55'385] lb MIN local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=-1 lpr=102 DELETING pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.040641 2 0.000201
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 103 pg[9.15( v 55'385 (0'0,55'385] lb MIN local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=-1 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.040816 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 103 pg[9.15( v 55'385 (0'0,55'385] lb MIN local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=-1 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.088872 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 1048576 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:28.000640+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 1048576 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:29.000784+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 103 heartbeat osd_stat(store_statfs(0x4fcefc000/0x0/0x4ffc00000, data 0x8e553/0x130000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2bcf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 741735 data_alloc: 218103808 data_used: 118784
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 1040384 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 103 heartbeat osd_stat(store_statfs(0x4fcefc000/0x0/0x4ffc00000, data 0x8e553/0x130000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2bcf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:30.000992+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:59.503108+0000 osd.0 (osd.0) 110 : cluster [DBG] 7.3 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:23:59.517185+0000 osd.0 (osd.0) 111 : cluster [DBG] 7.3 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 111) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:59.503108+0000 osd.0 (osd.0) 110 : cluster [DBG] 7.3 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:23:59.517185+0000 osd.0 (osd.0) 111 : cluster [DBG] 7.3 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab28b39800
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 1015808 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:31.001160+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 103 handle_osd_map epochs [103,104], i have 103, src has [1,104]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 999424 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:32.001313+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 999424 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 104 heartbeat osd_stat(store_statfs(0x4fcefa000/0x0/0x4ffc00000, data 0x900d0/0x133000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2bcf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:33.001484+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 999424 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:34.001623+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 104 heartbeat osd_stat(store_statfs(0x4fcefa000/0x0/0x4ffc00000, data 0x900d0/0x133000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2bcf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 104 handle_osd_map epochs [105,105], i have 105, src has [1,105]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.162210464s of 10.324795723s, submitted: 32
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 748883 data_alloc: 218103808 data_used: 126976
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 64954368 unmapped: 1048576 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:35.001792+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 106 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=67) [0] r=0 lpr=67 crt=55'385 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 69.347420 117 0.000405
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 106 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=67) [0] r=0 lpr=67 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active 69.356871 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 106 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=67) [0] r=0 lpr=67 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary 70.372921 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 106 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=67) [0] r=0 lpr=67 crt=55'385 mlcod 0'0 active mbc={}] exit Started 70.372967 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 106 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=67) [0] r=0 lpr=67 crt=55'385 mlcod 0'0 active mbc={}] enter Reset
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 106 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106 pruub=10.653602600s) [2] r=-1 lpr=106 pi=[67,106)/1 crt=55'385 mlcod 0'0 active pruub 222.269836426s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 106 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106 pruub=10.653409958s) [2] r=-1 lpr=106 pi=[67,106)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 222.269836426s@ mbc={}] exit Reset 0.000258 1 0.000394
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 106 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106 pruub=10.653409958s) [2] r=-1 lpr=106 pi=[67,106)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 222.269836426s@ mbc={}] enter Started
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 106 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106 pruub=10.653409958s) [2] r=-1 lpr=106 pi=[67,106)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 222.269836426s@ mbc={}] enter Start
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 106 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106 pruub=10.653409958s) [2] r=-1 lpr=106 pi=[67,106)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 222.269836426s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 106 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106 pruub=10.653409958s) [2] r=-1 lpr=106 pi=[67,106)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 222.269836426s@ mbc={}] exit Start 0.000088 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 106 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106 pruub=10.653409958s) [2] r=-1 lpr=106 pi=[67,106)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 222.269836426s@ mbc={}] enter Started/Stray
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 106 handle_osd_map epochs [106,106], i have 106, src has [1,106]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 64962560 unmapped: 1040384 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:36.001941+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 107 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=-1 lpr=106 pi=[67,106)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.836899 3 0.000294
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 107 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=-1 lpr=106 pi=[67,106)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.837127 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 107 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=-1 lpr=106 pi=[67,106)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 107 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 107 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 remapped mbc={}] exit Reset 0.000324 1 0.000422
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 107 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 107 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Start
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 107 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 107 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 remapped mbc={}] exit Start 0.000082 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 107 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 107 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 107 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 107 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000109 1 0.000276
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 107 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 107 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] async=[2] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000061 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 107 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] async=[2] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 107 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] async=[2] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000029 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 107 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] async=[2] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 64962560 unmapped: 1040384 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:37.002046+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 107 handle_osd_map epochs [107,108], i have 107, src has [1,108]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 107 handle_osd_map epochs [107,108], i have 108, src has [1,108]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 108 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] async=[2] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996993 4 0.000227
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 108 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] async=[2] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.997353 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 108 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] async=[2] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 108 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] async=[2] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 108 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 108 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] async=[2] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 108 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] async=[2] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.459248 5 0.000664
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 108 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] async=[2] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 108 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] async=[2] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000181 1 0.000101
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 108 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] async=[2] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 108 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] async=[2] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000463 1 0.000078
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 108 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] async=[2] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65003520 unmapped: 999424 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 108 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] async=[2] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.061001 2 0.000038
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 108 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] async=[2] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:38.002220+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 108 handle_osd_map epochs [108,109], i have 108, src has [1,109]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 108 handle_osd_map epochs [109,109], i have 109, src has [1,109]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] async=[2] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.480487 1 0.000188
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] async=[2] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active 1.002000 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] async=[2] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary 1.999399 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] async=[2] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started 1.999547 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] async=[2] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] enter Reset
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109 pruub=15.457493782s) [2] async=[2] r=-1 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 55'385 active pruub 229.911041260s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109 pruub=15.457426071s) [2] r=-1 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 229.911041260s@ mbc={}] exit Reset 0.000101 1 0.000141
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109 pruub=15.457426071s) [2] r=-1 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 229.911041260s@ mbc={}] enter Started
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109 pruub=15.457426071s) [2] r=-1 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 229.911041260s@ mbc={}] enter Start
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109 pruub=15.457426071s) [2] r=-1 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 229.911041260s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109 pruub=15.457426071s) [2] r=-1 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 229.911041260s@ mbc={}] exit Start 0.000007 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109 pruub=15.457426071s) [2] r=-1 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 229.911041260s@ mbc={}] enter Started/Stray
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 109 handle_osd_map epochs [109,109], i have 109, src has [1,109]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fceeb000/0x0/0x4ffc00000, data 0x98815/0x142000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2bcf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 958464 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:39.002360+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 110 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=-1 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.056506 6 0.000177
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 110 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=-1 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 110 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=-1 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 110 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=-1 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001239 2 0.000110
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 110 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=-1 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 110 pg[9.19( v 55'385 (0'0,55'385] lb MIN local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=-1 lpr=109 DELETING pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.060356 2 0.000244
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 110 pg[9.19( v 55'385 (0'0,55'385] lb MIN local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=-1 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.061677 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 110 pg[9.19( v 55'385 (0'0,55'385] lb MIN local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=-1 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.118245 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 752324 data_alloc: 218103808 data_used: 126976
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 958464 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:40.002508+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.13 deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.13 deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 958464 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:41.002676+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:10.443992+0000 osd.0 (osd.0) 112 : cluster [DBG] 7.13 deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:10.458089+0000 osd.0 (osd.0) 113 : cluster [DBG] 7.13 deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 110 handle_osd_map epochs [110,111], i have 110, src has [1,111]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 113) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:10.443992+0000 osd.0 (osd.0) 112 : cluster [DBG] 7.13 deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:10.458089+0000 osd.0 (osd.0) 113 : cluster [DBG] 7.13 deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65052672 unmapped: 950272 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:42.002858+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65052672 unmapped: 950272 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:43.003007+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65069056 unmapped: 933888 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 111 pg[9.1c(unlocked)] enter Initial
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 111 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111) [0] r=0 lpr=0 pi=[86,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000104 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 111 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111) [0] r=0 lpr=0 pi=[86,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 111 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111) [0] r=0 lpr=111 pi=[86,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000037
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 111 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111) [0] r=0 lpr=111 pi=[86,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 111 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111) [0] r=0 lpr=111 pi=[86,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 111 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111) [0] r=0 lpr=111 pi=[86,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 111 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111) [0] r=0 lpr=111 pi=[86,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 111 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111) [0] r=0 lpr=111 pi=[86,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 111 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111) [0] r=0 lpr=111 pi=[86,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 111 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111) [0] r=0 lpr=111 pi=[86,111)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 111 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111) [0] r=0 lpr=111 pi=[86,111)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000151 1 0.000053
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 111 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111) [0] r=0 lpr=111 pi=[86,111)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 111 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111) [0] r=0 lpr=111 pi=[86,111)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000036 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 111 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111) [0] r=0 lpr=111 pi=[86,111)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000252 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 111 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111) [0] r=0 lpr=111 pi=[86,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 111 handle_osd_map epochs [112,112], i have 111, src has [1,112]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 112 heartbeat osd_stat(store_statfs(0x4fcee6000/0x0/0x4ffc00000, data 0x9bea9/0x147000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2bcf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:44.003127+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 112 handle_osd_map epochs [111,113], i have 112, src has [1,113]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111) [0] r=0 lpr=111 pi=[86,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.509213 6 0.000146
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111) [0] r=0 lpr=111 pi=[86,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.509501 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111) [0] r=0 lpr=111 pi=[86,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.509528 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111) [0] r=0 lpr=111 pi=[86,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[86,113)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[86,113)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000731 1 0.000199
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[86,113)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[86,113)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[86,113)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[86,113)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000007 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[86,113)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 113 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 763748 data_alloc: 218103808 data_used: 126976
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65077248 unmapped: 925696 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:45.003329+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.774274826s of 10.907894135s, submitted: 68
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 114 pg[9.1e(unlocked)] enter Initial
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=0 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000064 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=0 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000027
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000203 1 0.000052
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000033 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000260 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 114 pg[9.1c( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.019644 5 0.000770
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 114 pg[9.1c( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 114 pg[9.1c( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 114 pg[9.1c( v 55'385 lc 55'136 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[86,113)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.006689 4 0.001010
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 114 pg[9.1c( v 55'385 lc 55'136 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[86,113)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 114 pg[9.1c( v 55'385 lc 55'136 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[86,113)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000096 1 0.000082
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 114 pg[9.1c( v 55'385 lc 55'136 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[86,113)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 114 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[86,113)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.059679 1 0.000036
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 114 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[86,113)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 860160 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:46.003462+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcedf000/0x0/0x4ffc00000, data 0x9f48b/0x14d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2bcf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 114 handle_osd_map epochs [115,115], i have 114, src has [1,115]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 114 handle_osd_map epochs [114,115], i have 115, src has [1,115]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.021164 2 0.000067
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.021464 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.021491 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[86,113)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.953575 1 0.000054
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[86,113)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.020203 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[86,113)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started 2.039920 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[86,113)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000119 1 0.000168
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000006 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown mbc={}] exit Reset 0.000099 1 0.000198
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Start
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000056 1 0.000057
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Nov 24 18:51:58 compute-0 ceph-osd[88544]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001126 3 0.000060
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 851968 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:47.003602+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fceda000/0x0/0x4ffc00000, data 0xa1097/0x151000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2bcf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.a scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.a scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 115 handle_osd_map epochs [115,116], i have 115, src has [1,116]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 115 handle_osd_map epochs [116,116], i have 116, src has [1,116]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 116 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.990028 2 0.000063
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 116 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.991273 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 116 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 116 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 116 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 116 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/86 les/c/f=116/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005807 3 0.000150
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 116 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/86 les/c/f=116/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 116 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/86 les/c/f=116/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 116 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/86 les/c/f=116/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 24 18:51:58 compute-0 ceph-osd[88544]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 116 pg[9.1e( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 0.998291 6 0.000041
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 116 pg[9.1e( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 116 pg[9.1e( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 116 handle_osd_map epochs [116,116], i have 116, src has [1,116]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: not registered w/ OSD
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 116 pg[9.1e( v 55'385 lc 55'220 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.006128 3 0.000114
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 116 pg[9.1e( v 55'385 lc 55'220 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 116 pg[9.1e( v 55'385 lc 55'220 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000050 1 0.000041
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 116 pg[9.1e( v 55'385 lc 55'220 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 116 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.095763 1 0.000033
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 116 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 753664 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:48.003779+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:17.403067+0000 osd.0 (osd.0) 114 : cluster [DBG] 3.a scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:17.417202+0000 osd.0 (osd.0) 115 : cluster [DBG] 3.a scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 115) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:17.403067+0000 osd.0 (osd.0) 114 : cluster [DBG] 3.a scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:17.417202+0000 osd.0 (osd.0) 115 : cluster [DBG] 3.a scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 116 handle_osd_map epochs [117,117], i have 116, src has [1,117]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.923852 1 0.000045
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.025894 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started 2.024216 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown mbc={}] exit Reset 0.000154 1 0.000199
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Start
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown mbc={}] exit Start 0.000042 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000053 1 0.000154
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: merge_log_dups log.dups.size()=0olog.dups.size()=10
Nov 24 18:51:58 compute-0 ceph-osd[88544]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=10
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001073 3 0.000102
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000011 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 753664 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:49.003988+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fced6000/0x0/0x4ffc00000, data 0xa46bf/0x157000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2bcf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.1b deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.1b deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 117 handle_osd_map epochs [117,118], i have 117, src has [1,118]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 117 handle_osd_map epochs [118,118], i have 118, src has [1,118]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 118 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.008445 2 0.000092
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 118 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009699 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 118 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 118 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 118 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 118 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/75 les/c/f=118/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004473 4 0.000152
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 118 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/75 les/c/f=118/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 118 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/75 les/c/f=118/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 pg_epoch: 118 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/75 les/c/f=118/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 801596 data_alloc: 218103808 data_used: 135168
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 737280 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:50.004203+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:19.315229+0000 osd.0 (osd.0) 116 : cluster [DBG] 3.1b deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:19.328449+0000 osd.0 (osd.0) 117 : cluster [DBG] 3.1b deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 117) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:19.315229+0000 osd.0 (osd.0) 116 : cluster [DBG] 3.1b deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:19.328449+0000 osd.0 (osd.0) 117 : cluster [DBG] 3.1b deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 118 handle_osd_map epochs [118,119], i have 118, src has [1,119]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 761856 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:51.004371+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 119 handle_osd_map epochs [119,120], i have 119, src has [1,120]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 704512 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:52.004612+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.9 deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.9 deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 688128 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:53.004793+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:22.307681+0000 osd.0 (osd.0) 118 : cluster [DBG] 7.9 deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:22.321945+0000 osd.0 (osd.0) 119 : cluster [DBG] 7.9 deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcec9000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2bcf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 119) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:22.307681+0000 osd.0 (osd.0) 118 : cluster [DBG] 7.9 deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:22.321945+0000 osd.0 (osd.0) 119 : cluster [DBG] 7.9 deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 647168 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:54.005006+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:23.357414+0000 osd.0 (osd.0) 120 : cluster [DBG] 7.1b scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:23.371485+0000 osd.0 (osd.0) 121 : cluster [DBG] 7.1b scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 121) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:23.357414+0000 osd.0 (osd.0) 120 : cluster [DBG] 7.1b scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:23.371485+0000 osd.0 (osd.0) 121 : cluster [DBG] 7.1b scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 808763 data_alloc: 218103808 data_used: 135168
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 638976 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:55.005204+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:24.325281+0000 osd.0 (osd.0) 122 : cluster [DBG] 7.18 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:24.339335+0000 osd.0 (osd.0) 123 : cluster [DBG] 7.18 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 123) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:24.325281+0000 osd.0 (osd.0) 122 : cluster [DBG] 7.18 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:24.339335+0000 osd.0 (osd.0) 123 : cluster [DBG] 7.18 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 638976 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:56.005443+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.672332764s of 10.857107162s, submitted: 50
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 630784 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:57.005599+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:26.341361+0000 osd.0 (osd.0) 124 : cluster [DBG] 7.1f scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:26.355455+0000 osd.0 (osd.0) 125 : cluster [DBG] 7.1f scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 125) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:26.341361+0000 osd.0 (osd.0) 124 : cluster [DBG] 7.1f scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:26.355455+0000 osd.0 (osd.0) 125 : cluster [DBG] 7.1f scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65388544 unmapped: 614400 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:58.005853+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65388544 unmapped: 614400 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:23:59.006012+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcecb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2bcf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 811379 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65413120 unmapped: 589824 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:00.006206+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:29.338632+0000 osd.0 (osd.0) 126 : cluster [DBG] 3.1f scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:29.352770+0000 osd.0 (osd.0) 127 : cluster [DBG] 3.1f scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 127) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:29.338632+0000 osd.0 (osd.0) 126 : cluster [DBG] 3.1f scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:29.352770+0000 osd.0 (osd.0) 127 : cluster [DBG] 3.1f scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65429504 unmapped: 573440 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:01.006543+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65429504 unmapped: 573440 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:02.006671+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:31.330980+0000 osd.0 (osd.0) 128 : cluster [DBG] 3.9 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:31.345162+0000 osd.0 (osd.0) 129 : cluster [DBG] 3.9 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 129) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:31.330980+0000 osd.0 (osd.0) 128 : cluster [DBG] 3.9 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:31.345162+0000 osd.0 (osd.0) 129 : cluster [DBG] 3.9 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65437696 unmapped: 565248 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:03.006846+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:32.339446+0000 osd.0 (osd.0) 130 : cluster [DBG] 3.15 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:32.353581+0000 osd.0 (osd.0) 131 : cluster [DBG] 3.15 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcecb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2bcf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 131) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:32.339446+0000 osd.0 (osd.0) 130 : cluster [DBG] 3.15 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:32.353581+0000 osd.0 (osd.0) 131 : cluster [DBG] 3.15 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:04.007028+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65437696 unmapped: 565248 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 813674 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcecb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2bcf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:05.007154+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65445888 unmapped: 557056 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:06.007331+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:35.341805+0000 osd.0 (osd.0) 132 : cluster [DBG] 3.12 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:35.355849+0000 osd.0 (osd.0) 133 : cluster [DBG] 3.12 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65445888 unmapped: 557056 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 133) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:35.341805+0000 osd.0 (osd.0) 132 : cluster [DBG] 3.12 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:35.355849+0000 osd.0 (osd.0) 133 : cluster [DBG] 3.12 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:07.007535+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65454080 unmapped: 548864 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.946367264s of 10.981134415s, submitted: 10
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcecb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2bcf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:08.007771+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:37.322584+0000 osd.0 (osd.0) 134 : cluster [DBG] 11.10 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:37.336698+0000 osd.0 (osd.0) 135 : cluster [DBG] 11.10 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65462272 unmapped: 540672 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 135) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:37.322584+0000 osd.0 (osd.0) 134 : cluster [DBG] 11.10 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:37.336698+0000 osd.0 (osd.0) 135 : cluster [DBG] 11.10 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:09.007950+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65462272 unmapped: 540672 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817119 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:10.008139+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:39.322185+0000 osd.0 (osd.0) 136 : cluster [DBG] 8.10 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:39.336327+0000 osd.0 (osd.0) 137 : cluster [DBG] 8.10 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65478656 unmapped: 524288 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 137) v1
Nov 24 18:51:58 compute-0 ceph-mon[74927]: from='client.14873 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:39.322185+0000 osd.0 (osd.0) 136 : cluster [DBG] 8.10 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:39.336327+0000 osd.0 (osd.0) 137 : cluster [DBG] 8.10 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:11.008351+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65478656 unmapped: 524288 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-mon[74927]: from='client.14875 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-mon[74927]: from='client.14879 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:12.008470+0000)
Nov 24 18:51:58 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3392560238' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65486848 unmapped: 516096 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcecb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2bcf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3291545781' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:13.008630+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65486848 unmapped: 516096 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcecb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2bcf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:14.008794+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65503232 unmapped: 499712 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.b scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.b scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 818266 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:15.008963+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:44.310399+0000 osd.0 (osd.0) 138 : cluster [DBG] 8.b scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:44.324458+0000 osd.0 (osd.0) 139 : cluster [DBG] 8.b scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65519616 unmapped: 483328 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 139) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:44.310399+0000 osd.0 (osd.0) 138 : cluster [DBG] 8.b scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:44.324458+0000 osd.0 (osd.0) 139 : cluster [DBG] 8.b scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:16.009134+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65519616 unmapped: 483328 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:17.009303+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:46.297200+0000 osd.0 (osd.0) 140 : cluster [DBG] 11.4 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:46.311342+0000 osd.0 (osd.0) 141 : cluster [DBG] 11.4 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65527808 unmapped: 475136 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 141) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:46.297200+0000 osd.0 (osd.0) 140 : cluster [DBG] 11.4 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:46.311342+0000 osd.0 (osd.0) 141 : cluster [DBG] 11.4 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:18.009500+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65527808 unmapped: 475136 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcecb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2bcf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:19.009640+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65527808 unmapped: 475136 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.942500114s of 11.970263481s, submitted: 8
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 820561 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcecb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2bcf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:20.009768+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:49.292865+0000 osd.0 (osd.0) 142 : cluster [DBG] 8.6 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:49.306910+0000 osd.0 (osd.0) 143 : cluster [DBG] 8.6 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65552384 unmapped: 450560 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcecb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2bcf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 143) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:49.292865+0000 osd.0 (osd.0) 142 : cluster [DBG] 8.6 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:49.306910+0000 osd.0 (osd.0) 143 : cluster [DBG] 8.6 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:21.009980+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:50.321890+0000 osd.0 (osd.0) 144 : cluster [DBG] 11.14 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:50.339494+0000 osd.0 (osd.0) 145 : cluster [DBG] 11.14 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65560576 unmapped: 442368 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 145) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:50.321890+0000 osd.0 (osd.0) 144 : cluster [DBG] 11.14 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:50.339494+0000 osd.0 (osd.0) 145 : cluster [DBG] 11.14 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:22.010150+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:51.331944+0000 osd.0 (osd.0) 146 : cluster [DBG] 8.9 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:51.345943+0000 osd.0 (osd.0) 147 : cluster [DBG] 8.9 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 434176 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcecb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2bcf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 147) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:51.331944+0000 osd.0 (osd.0) 146 : cluster [DBG] 8.9 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:51.345943+0000 osd.0 (osd.0) 147 : cluster [DBG] 8.9 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:23.010389+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:52.335957+0000 osd.0 (osd.0) 148 : cluster [DBG] 11.6 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:52.350009+0000 osd.0 (osd.0) 149 : cluster [DBG] 11.6 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65576960 unmapped: 1474560 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 149) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:52.335957+0000 osd.0 (osd.0) 148 : cluster [DBG] 11.6 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:52.350009+0000 osd.0 (osd.0) 149 : cluster [DBG] 11.6 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:24.010552+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65585152 unmapped: 1466368 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 824005 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:25.010709+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 1458176 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:26.010850+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 1458176 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:27.010998+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 1458176 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.f scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.f scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:28.011175+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:57.427335+0000 osd.0 (osd.0) 150 : cluster [DBG] 8.f scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:57.448510+0000 osd.0 (osd.0) 151 : cluster [DBG] 8.f scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65601536 unmapped: 1449984 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcecb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2bcf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.c deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.c deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 151) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:57.427335+0000 osd.0 (osd.0) 150 : cluster [DBG] 8.f scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:57.448510+0000 osd.0 (osd.0) 151 : cluster [DBG] 8.f scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:29.011360+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:58.452291+0000 osd.0 (osd.0) 152 : cluster [DBG] 8.c deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:24:58.466367+0000 osd.0 (osd.0) 153 : cluster [DBG] 8.c deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65601536 unmapped: 1449984 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcecb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2bcf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826299 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 153) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:58.452291+0000 osd.0 (osd.0) 152 : cluster [DBG] 8.c deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:24:58.466367+0000 osd.0 (osd.0) 153 : cluster [DBG] 8.c deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:30.011623+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65601536 unmapped: 1449984 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:31.011747+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65617920 unmapped: 1433600 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcecb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2bcf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:32.011920+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65617920 unmapped: 1433600 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcecb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2bcf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:33.012045+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65626112 unmapped: 1425408 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:34.012108+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 1409024 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826299 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcecb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2bcf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:35.012238+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 1409024 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:36.012341+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 1400832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:37.012519+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 1400832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:38.012674+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 1400832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:39.012814+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65658880 unmapped: 1392640 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826299 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:40.012963+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65658880 unmapped: 1392640 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcecb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2bcf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.f scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.090639114s of 21.129522324s, submitted: 12
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.f scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:41.013082+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:10.422395+0000 osd.0 (osd.0) 154 : cluster [DBG] 11.f scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:10.436538+0000 osd.0 (osd.0) 155 : cluster [DBG] 11.f scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 155) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:10.422395+0000 osd.0 (osd.0) 154 : cluster [DBG] 11.f scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:10.436538+0000 osd.0 (osd.0) 155 : cluster [DBG] 11.f scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 1384448 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.1 deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.1 deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:42.013228+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:11.471340+0000 osd.0 (osd.0) 156 : cluster [DBG] 11.1 deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:11.485395+0000 osd.0 (osd.0) 157 : cluster [DBG] 11.1 deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 157) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:11.471340+0000 osd.0 (osd.0) 156 : cluster [DBG] 11.1 deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:11.485395+0000 osd.0 (osd.0) 157 : cluster [DBG] 11.1 deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 1384448 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:43.013393+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 1384448 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:44.013575+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 1376256 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.e deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.e deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 829742 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:45.013697+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:14.540416+0000 osd.0 (osd.0) 158 : cluster [DBG] 8.e deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:14.554484+0000 osd.0 (osd.0) 159 : cluster [DBG] 8.e deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 159) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:14.540416+0000 osd.0 (osd.0) 158 : cluster [DBG] 8.e deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:14.554484+0000 osd.0 (osd.0) 159 : cluster [DBG] 8.e deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 1376256 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:46.013961+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65683456 unmapped: 1368064 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:47.014295+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65683456 unmapped: 1368064 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:48.014492+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:17.556213+0000 osd.0 (osd.0) 160 : cluster [DBG] 11.19 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:17.570197+0000 osd.0 (osd.0) 161 : cluster [DBG] 11.19 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 161) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:17.556213+0000 osd.0 (osd.0) 160 : cluster [DBG] 11.19 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:17.570197+0000 osd.0 (osd.0) 161 : cluster [DBG] 11.19 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 1359872 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:49.014737+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 1359872 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 832040 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:50.014920+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:19.591138+0000 osd.0 (osd.0) 162 : cluster [DBG] 11.17 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:19.605210+0000 osd.0 (osd.0) 163 : cluster [DBG] 11.17 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 163) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:19.591138+0000 osd.0 (osd.0) 162 : cluster [DBG] 11.17 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:19.605210+0000 osd.0 (osd.0) 163 : cluster [DBG] 11.17 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65699840 unmapped: 1351680 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.e scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.144165993s of 10.187912941s, submitted: 10
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.e scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:51.015130+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:20.610302+0000 osd.0 (osd.0) 164 : cluster [DBG] 11.e scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:20.624381+0000 osd.0 (osd.0) 165 : cluster [DBG] 11.e scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 165) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:20.610302+0000 osd.0 (osd.0) 164 : cluster [DBG] 11.e scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:20.624381+0000 osd.0 (osd.0) 165 : cluster [DBG] 11.e scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65708032 unmapped: 1343488 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:52.015350+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65708032 unmapped: 1343488 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:53.015490+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:22.611181+0000 osd.0 (osd.0) 166 : cluster [DBG] 8.18 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:22.625289+0000 osd.0 (osd.0) 167 : cluster [DBG] 8.18 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65708032 unmapped: 1343488 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 167) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:22.611181+0000 osd.0 (osd.0) 166 : cluster [DBG] 8.18 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:22.625289+0000 osd.0 (osd.0) 167 : cluster [DBG] 8.18 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:54.015653+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 1318912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 834336 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:55.015780+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 1318912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:56.015939+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 1318912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:57.016065+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:26.634017+0000 osd.0 (osd.0) 168 : cluster [DBG] 8.1f scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:26.647951+0000 osd.0 (osd.0) 169 : cluster [DBG] 8.1f scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65740800 unmapped: 1310720 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 169) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:26.634017+0000 osd.0 (osd.0) 168 : cluster [DBG] 8.1f scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:26.647951+0000 osd.0 (osd.0) 169 : cluster [DBG] 8.1f scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:58.016263+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 1302528 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:24:59.016479+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:28.627086+0000 osd.0 (osd.0) 170 : cluster [DBG] 8.1a scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:28.641198+0000 osd.0 (osd.0) 171 : cluster [DBG] 8.1a scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 1302528 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 171) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:28.627086+0000 osd.0 (osd.0) 170 : cluster [DBG] 8.1a scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:28.641198+0000 osd.0 (osd.0) 171 : cluster [DBG] 8.1a scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836632 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:00.016857+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 1302528 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:01.016988+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 1302528 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:02.017156+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65757184 unmapped: 1294336 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:03.017316+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65757184 unmapped: 1294336 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:04.017463+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 1286144 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.14 deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.987627983s of 14.013010025s, submitted: 8
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.14 deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837780 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:05.017622+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:34.623418+0000 osd.0 (osd.0) 172 : cluster [DBG] 8.14 deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:34.641105+0000 osd.0 (osd.0) 173 : cluster [DBG] 8.14 deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 1277952 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 173) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:34.623418+0000 osd.0 (osd.0) 172 : cluster [DBG] 8.14 deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:34.641105+0000 osd.0 (osd.0) 173 : cluster [DBG] 8.14 deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:06.017802+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:35.586226+0000 osd.0 (osd.0) 174 : cluster [DBG] 8.1d scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:35.600392+0000 osd.0 (osd.0) 175 : cluster [DBG] 8.1d scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 1277952 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 175) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:35.586226+0000 osd.0 (osd.0) 174 : cluster [DBG] 8.1d scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:35.600392+0000 osd.0 (osd.0) 175 : cluster [DBG] 8.1d scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:07.017975+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 1269760 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.d scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.d scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:08.018131+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:37.572951+0000 osd.0 (osd.0) 176 : cluster [DBG] 10.d scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:37.590558+0000 osd.0 (osd.0) 177 : cluster [DBG] 10.d scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 1269760 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 177) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:37.572951+0000 osd.0 (osd.0) 176 : cluster [DBG] 10.d scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:37.590558+0000 osd.0 (osd.0) 177 : cluster [DBG] 10.d scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:09.018277+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:38.539434+0000 osd.0 (osd.0) 178 : cluster [DBG] 10.1e scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:38.553525+0000 osd.0 (osd.0) 179 : cluster [DBG] 10.1e scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 1269760 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841225 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 179) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:38.539434+0000 osd.0 (osd.0) 178 : cluster [DBG] 10.1e scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:38.553525+0000 osd.0 (osd.0) 179 : cluster [DBG] 10.1e scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:10.018419+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 1269760 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:11.018554+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:40.533052+0000 osd.0 (osd.0) 180 : cluster [DBG] 10.7 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:40.547181+0000 osd.0 (osd.0) 181 : cluster [DBG] 10.7 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65789952 unmapped: 1261568 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 181) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:40.533052+0000 osd.0 (osd.0) 180 : cluster [DBG] 10.7 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:40.547181+0000 osd.0 (osd.0) 181 : cluster [DBG] 10.7 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:12.018773+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65798144 unmapped: 1253376 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:13.018952+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65798144 unmapped: 1253376 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:14.019108+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1236992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842373 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:15.019266+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 1228800 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.753761292s of 10.791145325s, submitted: 10
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:16.019442+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:45.414565+0000 osd.0 (osd.0) 182 : cluster [DBG] 10.4 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:45.428684+0000 osd.0 (osd.0) 183 : cluster [DBG] 10.4 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 1187840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 183) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:45.414565+0000 osd.0 (osd.0) 182 : cluster [DBG] 10.4 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:45.428684+0000 osd.0 (osd.0) 183 : cluster [DBG] 10.4 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:17.019646+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:46.422807+0000 osd.0 (osd.0) 184 : cluster [DBG] 10.8 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:46.436944+0000 osd.0 (osd.0) 185 : cluster [DBG] 10.8 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 1171456 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 185) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:46.422807+0000 osd.0 (osd.0) 184 : cluster [DBG] 10.8 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:46.436944+0000 osd.0 (osd.0) 185 : cluster [DBG] 10.8 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:18.020936+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:47.374638+0000 osd.0 (osd.0) 186 : cluster [DBG] 10.1 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:47.388631+0000 osd.0 (osd.0) 187 : cluster [DBG] 10.1 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 1163264 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:19.021220+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 187) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:47.374638+0000 osd.0 (osd.0) 186 : cluster [DBG] 10.1 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:47.388631+0000 osd.0 (osd.0) 187 : cluster [DBG] 10.1 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 1163264 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 845817 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:20.021344+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65896448 unmapped: 1155072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:21.021547+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 1163264 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.e scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.e scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:22.021674+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:51.446140+0000 osd.0 (osd.0) 188 : cluster [DBG] 10.e scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:51.463893+0000 osd.0 (osd.0) 189 : cluster [DBG] 10.e scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 189) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:51.446140+0000 osd.0 (osd.0) 188 : cluster [DBG] 10.e scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:51.463893+0000 osd.0 (osd.0) 189 : cluster [DBG] 10.e scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65896448 unmapped: 1155072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:23.021984+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65896448 unmapped: 1155072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:24.022122+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 1146880 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 846965 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:25.022303+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 1146880 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.988506317s of 10.017522812s, submitted: 8
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:26.022662+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:55.432244+0000 osd.0 (osd.0) 190 : cluster [DBG] 10.15 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:55.449716+0000 osd.0 (osd.0) 191 : cluster [DBG] 10.15 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 191) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:55.432244+0000 osd.0 (osd.0) 190 : cluster [DBG] 10.15 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:55.449716+0000 osd.0 (osd.0) 191 : cluster [DBG] 10.15 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 1138688 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:27.022928+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:56.396613+0000 osd.0 (osd.0) 192 : cluster [DBG] 10.17 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:56.410756+0000 osd.0 (osd.0) 193 : cluster [DBG] 10.17 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 193) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:56.396613+0000 osd.0 (osd.0) 192 : cluster [DBG] 10.17 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:56.410756+0000 osd.0 (osd.0) 193 : cluster [DBG] 10.17 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 1114112 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:28.023167+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 1114112 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:29.023332+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:58.433782+0000 osd.0 (osd.0) 194 : cluster [DBG] 10.16 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:25:58.447875+0000 osd.0 (osd.0) 195 : cluster [DBG] 10.16 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 195) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:58.433782+0000 osd.0 (osd.0) 194 : cluster [DBG] 10.16 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:25:58.447875+0000 osd.0 (osd.0) 195 : cluster [DBG] 10.16 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 1097728 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 850412 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:30.023575+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 1097728 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:31.023738+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65970176 unmapped: 1081344 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:32.023879+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:26:01.446124+0000 osd.0 (osd.0) 196 : cluster [DBG] 10.9 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:26:01.463782+0000 osd.0 (osd.0) 197 : cluster [DBG] 10.9 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 197) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:26:01.446124+0000 osd.0 (osd.0) 196 : cluster [DBG] 10.9 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:26:01.463782+0000 osd.0 (osd.0) 197 : cluster [DBG] 10.9 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 1097728 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:33.024113+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:26:02.444692+0000 osd.0 (osd.0) 198 : cluster [DBG] 9.1d scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:26:02.476476+0000 osd.0 (osd.0) 199 : cluster [DBG] 9.1d scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 199) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:26:02.444692+0000 osd.0 (osd.0) 198 : cluster [DBG] 9.1d scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:26:02.476476+0000 osd.0 (osd.0) 199 : cluster [DBG] 9.1d scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65970176 unmapped: 1081344 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:34.024284+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 1064960 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852708 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:35.024407+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 1064960 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:36.024551+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1056768 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:37.024730+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 1048576 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.11 deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.590790749s of 11.904295921s, submitted: 10
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.11 deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:38.024937+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:26:07.336430+0000 osd.0 (osd.0) 200 : cluster [DBG] 9.11 deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:26:07.368197+0000 osd.0 (osd.0) 201 : cluster [DBG] 9.11 deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 1048576 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 201) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:26:07.336430+0000 osd.0 (osd.0) 200 : cluster [DBG] 9.11 deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:26:07.368197+0000 osd.0 (osd.0) 201 : cluster [DBG] 9.11 deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:39.025113+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 1040384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855003 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:40.025243+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 203 sent 201 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:26:09.352034+0000 osd.0 (osd.0) 202 : cluster [DBG] 9.5 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:26:09.390814+0000 osd.0 (osd.0) 203 : cluster [DBG] 9.5 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 1040384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 203) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:26:09.352034+0000 osd.0 (osd.0) 202 : cluster [DBG] 9.5 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:26:09.390814+0000 osd.0 (osd.0) 203 : cluster [DBG] 9.5 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:41.025401+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 1040384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.b scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.b scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:42.025513+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 205 sent 203 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:26:11.318552+0000 osd.0 (osd.0) 204 : cluster [DBG] 9.b scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:26:11.346818+0000 osd.0 (osd.0) 205 : cluster [DBG] 9.b scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 1024000 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 205) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:26:11.318552+0000 osd.0 (osd.0) 204 : cluster [DBG] 9.b scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:26:11.346818+0000 osd.0 (osd.0) 205 : cluster [DBG] 9.b scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:43.025689+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 1024000 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:44.025827+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 1024000 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:45.025977+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 856150 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 1015808 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:46.026117+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 1015808 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:47.026255+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 1007616 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.d scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.052775383s of 10.091034889s, submitted: 6
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.d scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:48.026413+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 207 sent 205 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:26:17.427524+0000 osd.0 (osd.0) 206 : cluster [DBG] 9.d scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:26:17.469927+0000 osd.0 (osd.0) 207 : cluster [DBG] 9.d scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 1007616 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 207) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:26:17.427524+0000 osd.0 (osd.0) 206 : cluster [DBG] 9.d scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:26:17.469927+0000 osd.0 (osd.0) 207 : cluster [DBG] 9.d scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:49.026629+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 999424 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:50.026754+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 209 sent 207 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:26:19.395672+0000 osd.0 (osd.0) 208 : cluster [DBG] 9.3 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:26:19.438130+0000 osd.0 (osd.0) 209 : cluster [DBG] 9.3 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858444 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 991232 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 209) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:26:19.395672+0000 osd.0 (osd.0) 208 : cluster [DBG] 9.3 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:26:19.438130+0000 osd.0 (osd.0) 209 : cluster [DBG] 9.3 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:51.026945+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 991232 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:52.027067+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 211 sent 209 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:26:21.354195+0000 osd.0 (osd.0) 210 : cluster [DBG] 9.9 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:26:21.385973+0000 osd.0 (osd.0) 211 : cluster [DBG] 9.9 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 983040 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 211) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:26:21.354195+0000 osd.0 (osd.0) 210 : cluster [DBG] 9.9 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:26:21.385973+0000 osd.0 (osd.0) 211 : cluster [DBG] 9.9 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:53.027255+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 983040 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:54.027406+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 213 sent 211 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:26:23.411192+0000 osd.0 (osd.0) 212 : cluster [DBG] 9.1b scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:26:23.432347+0000 osd.0 (osd.0) 213 : cluster [DBG] 9.1b scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 983040 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 213) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:26:23.411192+0000 osd.0 (osd.0) 212 : cluster [DBG] 9.1b scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:26:23.432347+0000 osd.0 (osd.0) 213 : cluster [DBG] 9.1b scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:55.027693+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 860739 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 974848 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:56.027867+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 974848 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:57.028970+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66084864 unmapped: 966656 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.1 deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.1 deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.982617378s of 10.014144897s, submitted: 9
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:58.029317+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 215 sent 213 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:26:27.406308+0000 osd.0 (osd.0) 214 : cluster [DBG] 9.1 deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:26:27.445186+0000 osd.0 (osd.0) 215 : cluster [DBG] 9.1 deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66093056 unmapped: 958464 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 215) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:26:27.406308+0000 osd.0 (osd.0) 214 : cluster [DBG] 9.1 deep-scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:26:27.445186+0000 osd.0 (osd.0) 215 : cluster [DBG] 9.1 deep-scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:25:59.029736+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 950272 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:00.053062+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 861886 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 933888 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:01.053365+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 217 sent 215 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:26:30.314040+0000 osd.0 (osd.0) 216 : cluster [DBG] 9.16 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:26:30.342402+0000 osd.0 (osd.0) 217 : cluster [DBG] 9.16 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 942080 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 217) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:26:30.314040+0000 osd.0 (osd.0) 216 : cluster [DBG] 9.16 scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:26:30.342402+0000 osd.0 (osd.0) 217 : cluster [DBG] 9.16 scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:02.053584+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 219 sent 217 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:26:31.353129+0000 osd.0 (osd.0) 218 : cluster [DBG] 9.1c scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:26:31.392031+0000 osd.0 (osd.0) 219 : cluster [DBG] 9.1c scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 925696 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 219) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:26:31.353129+0000 osd.0 (osd.0) 218 : cluster [DBG] 9.1c scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:26:31.392031+0000 osd.0 (osd.0) 219 : cluster [DBG] 9.1c scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:03.053776+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  log_queue is 2 last_log 221 sent 219 num 2 unsent 2 sending 2
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:26:32.376752+0000 osd.0 (osd.0) 220 : cluster [DBG] 9.1e scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  will send 2025-11-24T18:26:32.408569+0000 osd.0 (osd.0) 221 : cluster [DBG] 9.1e scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 925696 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client handle_log_ack log(last 221) v1
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:26:32.376752+0000 osd.0 (osd.0) 220 : cluster [DBG] 9.1e scrub starts
Nov 24 18:51:58 compute-0 ceph-osd[88544]: log_client  logged 2025-11-24T18:26:32.408569+0000 osd.0 (osd.0) 221 : cluster [DBG] 9.1e scrub ok
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:04.054025+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 917504 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:05.054163+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 917504 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:06.054286+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 909312 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:07.054443+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 909312 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:08.054962+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 909312 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:09.055339+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 909312 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:10.055612+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 901120 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:11.055824+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 901120 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:12.056030+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 901120 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:13.056229+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 892928 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:14.056409+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 892928 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:15.056557+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 884736 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:16.056689+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 884736 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:17.056980+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 884736 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:18.057236+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 876544 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:19.057445+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 876544 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:20.057592+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 868352 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:21.057745+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 868352 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:22.057958+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 868352 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:23.058115+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 860160 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:24.058327+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 860160 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:25.058479+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 851968 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:26.058671+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 851968 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:27.058790+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 843776 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:28.058957+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 843776 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:29.059083+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 843776 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:30.059226+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 835584 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:31.059442+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 835584 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:32.059619+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 819200 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:33.059779+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 819200 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:34.059970+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 819200 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:35.060125+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66240512 unmapped: 811008 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:36.060266+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66240512 unmapped: 811008 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:37.060411+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 802816 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:38.060608+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 802816 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:39.061208+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 794624 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:40.061333+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 786432 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:41.061469+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 786432 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:42.061642+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 778240 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:43.061788+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 778240 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:44.061969+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66281472 unmapped: 770048 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:45.062144+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66281472 unmapped: 770048 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:46.062284+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:47.062430+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:48.062601+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:49.062786+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:50.062953+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:51.063080+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:52.063229+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:53.063407+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:54.063610+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:55.063749+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:56.063926+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:57.064051+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 745472 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:58.064233+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 745472 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:59.064370+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 745472 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:00.064518+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 737280 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:01.064650+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 737280 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:02.064812+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 729088 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:03.064919+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 729088 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:04.065062+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 729088 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:05.065272+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 720896 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:06.065404+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 720896 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:07.065582+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 712704 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:08.065779+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 712704 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:09.065930+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 704512 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:10.066064+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 704512 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:11.066190+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 704512 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:12.066326+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 696320 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:13.066484+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 696320 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:14.066620+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 696320 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:15.066763+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 696320 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:16.066880+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 688128 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:17.067074+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 688128 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:18.067228+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 679936 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:19.067356+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 679936 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:20.067475+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 688128 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:21.067611+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 679936 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:22.067750+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 679936 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:23.067889+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 679936 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:24.068100+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 679936 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:25.068252+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 671744 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:26.068406+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 663552 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:27.068614+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 663552 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:28.068842+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 663552 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:29.069033+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 647168 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:30.069216+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 647168 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:31.069368+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 647168 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:32.069555+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 638976 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:33.069722+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 638976 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:34.069864+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 630784 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:35.069986+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 622592 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:36.070204+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 622592 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:37.070393+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 614400 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:38.070565+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 614400 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:39.070816+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 606208 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:40.071016+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 606208 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:41.071153+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 606208 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:42.071315+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 598016 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:43.071454+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 598016 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:44.071568+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 589824 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:45.071770+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 581632 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:46.071990+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 581632 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:47.072132+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 581632 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:48.072323+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 573440 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:49.072479+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 573440 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:50.072622+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 573440 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:51.072774+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 565248 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:52.072982+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 565248 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:53.073145+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 557056 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:54.073278+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 557056 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:55.073420+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 557056 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:56.073594+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 548864 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:57.073742+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 548864 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:58.073949+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 548864 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:59.074129+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 540672 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:00.074268+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 540672 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:01.074437+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 532480 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:02.074640+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 532480 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:03.074819+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 532480 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:04.074988+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 524288 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:05.075586+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 524288 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:06.076248+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 516096 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:07.076395+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 516096 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:08.076571+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 516096 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:09.076715+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 507904 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:10.077012+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 507904 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:11.077325+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 507904 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:12.077485+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:13.077704+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:14.077970+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:15.078126+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:16.078271+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:17.078397+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:18.078527+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 491520 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:19.078641+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 491520 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:20.078822+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:21.079136+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 491520 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:22.079494+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 491520 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:23.079767+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 483328 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:24.080020+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 483328 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:25.080281+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 483328 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:26.080474+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 475136 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:27.080701+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 475136 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:28.080944+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 466944 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:29.081128+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 466944 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:30.081339+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 458752 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:31.081540+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 450560 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:32.081705+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 450560 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:33.081999+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 442368 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:34.082184+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 434176 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:35.082381+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 442368 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:36.082561+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 442368 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:37.082738+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 434176 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:38.083001+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 434176 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:39.083136+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 425984 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:40.083266+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 425984 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:41.083415+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 425984 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:42.083532+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 417792 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:43.083725+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 417792 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:44.083866+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 409600 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:45.084030+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 409600 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:46.084256+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 409600 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:47.084407+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 401408 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:48.084602+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 401408 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:49.084786+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 401408 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:50.084936+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 393216 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:51.085058+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 393216 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:52.085193+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 393216 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:53.085317+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 385024 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:54.085561+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 385024 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:55.085723+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 385024 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:56.085937+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 376832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:57.086095+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 376832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:58.086258+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 376832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:59.086442+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 376832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:00.086645+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 376832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:01.086815+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 368640 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:02.086998+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 368640 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:03.087181+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 360448 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:04.087316+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 360448 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:05.087486+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 376832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:06.087785+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 368640 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:07.087968+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 368640 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:08.088164+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 368640 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:09.090102+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 360448 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:10.091767+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 360448 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:11.093318+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 360448 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:12.093454+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 352256 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:13.093592+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 352256 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:14.094474+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 352256 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:15.094809+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 344064 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:16.095485+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 344064 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:17.096019+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 335872 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:18.096214+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 335872 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:19.096652+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 335872 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:20.096777+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 327680 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:21.096941+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 327680 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:22.097271+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 327680 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:23.097429+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 319488 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:24.097684+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 319488 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:25.097815+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 311296 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:26.098079+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 311296 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:27.098286+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 311296 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:28.098512+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 303104 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:29.098643+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 294912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:30.098811+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 294912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:31.098980+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 286720 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:32.099162+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 286720 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:33.099308+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 286720 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:34.099476+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 278528 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:35.099665+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 270336 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:36.099835+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 262144 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:37.100033+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 262144 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:38.100213+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 262144 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:39.100420+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 253952 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:40.100545+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 245760 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:41.100722+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 237568 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:42.101068+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 237568 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:43.101235+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 237568 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:44.101358+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:45.101508+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 237568 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:46.101749+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 221184 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:47.101920+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 221184 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:48.102128+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 212992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:49.102327+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 212992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:50.102502+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 204800 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:51.102647+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 196608 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:52.102770+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 196608 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:53.102928+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 188416 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:54.103140+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 188416 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:55.103326+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 180224 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:56.103475+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 180224 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:57.103626+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 180224 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:58.103828+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 172032 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:59.104640+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 172032 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:00.104817+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 172032 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:01.105031+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 163840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:02.105225+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 163840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:03.105390+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 163840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 5370 writes, 23K keys, 5370 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5370 writes, 751 syncs, 7.15 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5370 writes, 23K keys, 5370 commit groups, 1.0 writes per commit group, ingest: 18.36 MB, 0.03 MB/s
                                           Interval WAL: 5370 writes, 751 syncs, 7.15 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:04.105745+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 90112 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:05.105877+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 81920 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:06.106029+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 73728 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:07.106158+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 73728 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:08.106354+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 65536 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:09.106622+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 65536 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:10.106869+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 65536 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:11.107025+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 65536 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:12.107270+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 57344 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:13.107456+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 57344 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:14.107928+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 49152 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:15.108333+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 49152 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:16.108738+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 49152 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:17.110354+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 40960 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:18.110546+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 40960 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:19.111356+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 32768 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:20.112308+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 32768 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:21.112754+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 32768 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:22.113240+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 24576 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:23.113520+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 24576 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:24.113735+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 16384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:25.114418+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 16384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:26.114730+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 16384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:27.114859+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 8192 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:28.115035+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 8192 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:29.115158+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 8192 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:30.115325+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 0 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:31.115452+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 0 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:32.115733+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 0 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:33.115877+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:34.115970+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:35.116136+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:36.116296+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 1032192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:37.116637+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 1032192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:38.116961+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:39.117147+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:40.117273+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:41.117440+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:42.117597+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:43.117770+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:44.117907+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:45.118178+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:46.118337+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:47.118525+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 1007616 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:48.118685+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 1007616 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:49.118816+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:50.118952+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:51.119123+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:52.119250+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:53.119381+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:54.119492+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:55.119627+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:56.119769+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:57.119927+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 974848 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:58.120113+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 974848 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:59.120251+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:00.120394+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:01.120518+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:02.120711+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 958464 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:03.120919+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 958464 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:04.121082+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 958464 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:05.121214+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 950272 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:06.121403+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 942080 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:07.121596+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 942080 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:08.121814+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 933888 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:09.122012+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 933888 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:10.122197+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 925696 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:11.122365+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 925696 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:12.122523+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 925696 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:13.122644+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:14.122759+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:15.122971+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:16.123081+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:17.123256+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:18.123469+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:19.123625+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:20.123769+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:21.124028+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:22.124154+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:23.124292+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:24.124466+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:25.124606+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:26.124761+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 868352 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:27.124948+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 868352 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:28.125114+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 868352 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:29.125273+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 868352 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 332.425140381s of 332.453826904s, submitted: 7
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:30.125410+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 540672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:31.125522+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68878336 unmapped: 270336 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:32.125655+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68878336 unmapped: 270336 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:33.125802+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68878336 unmapped: 270336 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:34.125958+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68902912 unmapped: 245760 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:35.126071+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68927488 unmapped: 221184 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:36.126216+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68935680 unmapped: 212992 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:37.126352+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68935680 unmapped: 212992 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:38.126555+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68935680 unmapped: 212992 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:39.126673+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68943872 unmapped: 204800 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:40.126771+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68943872 unmapped: 204800 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:41.126910+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68952064 unmapped: 196608 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:42.127055+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68952064 unmapped: 196608 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:43.127172+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68960256 unmapped: 188416 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:44.127313+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68960256 unmapped: 188416 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:45.127495+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68960256 unmapped: 188416 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:46.127656+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 180224 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:47.127819+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 180224 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:48.127989+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 172032 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:49.128132+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 172032 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:50.128957+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 155648 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:51.129083+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 155648 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:52.129479+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 155648 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:53.129698+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69001216 unmapped: 147456 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:54.129920+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69001216 unmapped: 147456 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:55.130276+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69001216 unmapped: 147456 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:56.130443+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69009408 unmapped: 139264 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:57.130649+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69009408 unmapped: 139264 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:58.130826+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 131072 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:59.130969+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 131072 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:00.131272+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69025792 unmapped: 122880 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:01.131527+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69025792 unmapped: 122880 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:02.131644+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 114688 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:03.131822+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 114688 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:04.131980+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 114688 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:05.132145+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:06.132323+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:07.132440+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:08.132618+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:09.132810+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:10.132963+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:11.133115+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:12.133294+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:13.133443+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:14.133602+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:15.133720+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:16.133861+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:17.133953+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:18.134140+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:19.134294+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:20.134444+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 98304 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:21.134641+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 98304 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:22.134761+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 98304 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:23.135000+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 98304 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:24.135162+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 98304 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:25.135316+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 98304 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:26.135511+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 98304 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:27.135690+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 98304 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:28.135843+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 98304 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:29.135961+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 98304 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:30.136103+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 90112 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:31.136255+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 90112 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:32.136405+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 90112 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:33.136604+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 90112 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:34.136750+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 81920 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:35.136988+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 73728 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:36.137144+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 73728 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:37.137378+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 73728 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:38.137594+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 73728 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:39.137775+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 73728 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:40.137982+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 73728 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:41.138174+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 73728 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:42.138307+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 73728 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:43.138431+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 73728 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:44.138556+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 57344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:45.138675+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 57344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:46.138863+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 57344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:47.139028+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 57344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:48.139684+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 57344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:49.139814+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 57344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:50.139963+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 49152 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:51.140146+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:52.140270+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:53.140479+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:54.140715+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:55.140857+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:56.141074+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:57.141227+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:58.141377+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:59.141547+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:00.141734+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:01.141963+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:02.142080+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:03.142206+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:04.142395+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 24576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:05.142700+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 16384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:06.142846+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 16384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:07.142994+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 16384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:08.143165+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 16384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:09.143417+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 16384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:10.143590+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 16384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:11.143716+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 16384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:12.144021+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 16384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:13.144183+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 16384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:14.144360+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 8192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:15.144502+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 8192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:16.144658+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 8192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:17.144818+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 8192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:18.145013+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 0 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:19.145198+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 0 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:20.145321+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 0 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:21.145448+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 0 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:22.145686+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 0 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:23.145827+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 0 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:24.146003+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69165056 unmapped: 1032192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:25.146213+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69165056 unmapped: 1032192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:26.146378+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69165056 unmapped: 1032192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:27.146532+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69165056 unmapped: 1032192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:28.147488+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69165056 unmapped: 1032192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:29.148207+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69165056 unmapped: 1032192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:30.148576+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:31.148961+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:32.149092+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:33.149291+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:34.149687+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:35.149805+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:36.149935+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:37.150051+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:38.150220+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:39.150364+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:40.150534+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69189632 unmapped: 1007616 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:41.150738+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69189632 unmapped: 1007616 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:42.150847+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69189632 unmapped: 1007616 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:43.150975+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69189632 unmapped: 1007616 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:44.151188+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:45.151335+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:46.151469+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:47.151616+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:48.151845+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:49.152034+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:50.152158+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:51.152298+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:52.152496+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:53.152655+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:54.152881+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:55.153126+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:56.153270+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:57.153398+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:58.153603+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:59.153731+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69214208 unmapped: 983040 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:00.153861+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69214208 unmapped: 983040 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:01.153957+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69214208 unmapped: 983040 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:02.154735+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69214208 unmapped: 983040 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:03.154874+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69214208 unmapped: 983040 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:04.155132+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:05.155275+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:06.155473+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:07.155698+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:08.155976+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:09.156230+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:10.156477+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:11.156679+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:12.156864+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:13.157065+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:14.157348+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:15.157565+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:16.157747+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:17.157990+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:18.158156+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:19.158307+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:20.158496+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:21.158667+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:22.158796+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:23.158975+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:24.159119+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69271552 unmapped: 925696 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:25.159313+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69271552 unmapped: 925696 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:26.159496+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69271552 unmapped: 925696 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:27.159804+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69271552 unmapped: 925696 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:28.160013+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69271552 unmapped: 925696 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:29.160128+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69271552 unmapped: 925696 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:30.160341+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:31.161496+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:32.161630+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:33.161831+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:34.161960+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:35.162099+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:36.162285+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:37.162438+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:38.162609+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:39.162734+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:40.162912+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:41.163165+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:42.163366+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69296128 unmapped: 901120 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:43.163540+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69296128 unmapped: 901120 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:44.163686+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 876544 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:45.163923+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 876544 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:46.164105+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 876544 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:47.164357+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 876544 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:48.164577+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 876544 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:49.164738+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 876544 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:50.164930+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:51.165100+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:52.165241+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:53.165379+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:54.165513+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:55.165673+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:56.165834+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:57.166029+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:58.166153+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:59.166280+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:00.166414+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:01.166534+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:02.166666+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:03.166803+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:04.166984+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:05.167932+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69345280 unmapped: 851968 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:06.168124+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 843776 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:07.168331+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 843776 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:08.168546+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 843776 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:09.168677+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 843776 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:10.168800+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 843776 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:11.168930+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 843776 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:12.169058+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 843776 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:13.169197+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 843776 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: mgrc ms_handle_reset ms_handle_reset con 0x55ab26045c00
Nov 24 18:51:58 compute-0 ceph-osd[88544]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/536471675
Nov 24 18:51:58 compute-0 ceph-osd[88544]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/536471675,v1:192.168.122.100:6801/536471675]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: get_auth_request con 0x55ab27959800 auth_method 0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: mgrc handle_mgr_configure stats_period=5
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:14.169369+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:15.169484+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:16.169602+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:17.169819+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:18.169949+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:19.170076+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:20.170192+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 ms_handle_reset con 0x55ab26d95c00 session 0x55ab25fd8960
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26292400
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:21.170434+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:22.170571+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:23.170706+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:24.170849+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:25.171009+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:26.171154+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:27.171312+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:28.171505+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:29.171640+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:30.171761+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:31.171947+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:32.172184+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:33.172410+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:34.172618+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:35.172871+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:36.173137+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:37.173319+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:38.173587+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:39.173828+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:40.174008+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:41.174731+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:42.174958+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:43.175181+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:44.175462+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:45.175667+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:46.175953+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 679936 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:47.176115+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 679936 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:48.176373+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 679936 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:49.176653+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 679936 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:50.177011+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 679936 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:51.177194+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 671744 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:52.177345+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 671744 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:53.177505+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 671744 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:54.177812+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 671744 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:55.178067+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 671744 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:56.178297+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 671744 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:57.178474+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 671744 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:58.178611+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 671744 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:59.178815+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 671744 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:00.178973+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 663552 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:01.179115+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 663552 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:02.179269+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 663552 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:03.179523+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 663552 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:04.179713+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 663552 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:05.179868+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:06.180080+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:07.180287+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:08.180837+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:09.180971+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:10.181083+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:11.181218+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:12.181309+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:13.181413+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:14.181516+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:15.181673+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:16.181831+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:17.181959+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:18.182119+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:19.182270+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:20.182432+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69558272 unmapped: 638976 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:21.182576+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69558272 unmapped: 638976 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:22.182737+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69558272 unmapped: 638976 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 ms_handle_reset con 0x55ab27571400 session 0x55ab273dd2c0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26d95c00
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:23.182927+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:24.183056+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:25.183169+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:26.183270+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:27.183428+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:28.183605+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:29.183754+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:30.183891+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:31.184101+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:32.184276+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:33.184483+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:34.184736+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:35.184872+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:36.185014+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:37.185168+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:38.185318+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:39.185453+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 589824 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:40.185601+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 589824 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:41.185748+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 589824 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:42.185905+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 589824 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:43.185977+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 589824 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:44.186352+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 589824 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:45.186478+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69615616 unmapped: 581632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:46.186640+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69615616 unmapped: 581632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:47.186801+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69615616 unmapped: 581632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:48.186983+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69615616 unmapped: 581632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:49.187123+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69615616 unmapped: 581632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:50.187261+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 573440 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:51.187411+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 573440 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:52.187565+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 573440 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:53.187683+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 573440 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:54.187830+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 573440 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:55.187966+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 573440 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:56.188097+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 573440 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:57.188215+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 573440 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:58.188387+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 573440 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:59.188528+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 557056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:00.188769+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 557056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:01.188970+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 557056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:02.189109+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 557056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:03.189267+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 557056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:04.189488+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 557056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:05.189640+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 557056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:06.189889+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 557056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 24 18:51:58 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1022376122' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:07.190166+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 557056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:08.190346+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 557056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:09.190573+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 557056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:10.190730+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 557056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:11.190918+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 557056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:12.191028+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 557056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:13.191173+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 557056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:14.191332+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 548864 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:15.191464+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 548864 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:16.191641+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 548864 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:17.191769+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 548864 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:18.192341+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 548864 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:19.192464+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:20.192604+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:21.192729+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:22.192937+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:23.193079+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:24.193195+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:25.193321+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:26.193460+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:27.193579+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:28.193755+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:29.193923+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:30.194102+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:31.194244+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:32.194357+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:33.194471+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:34.194609+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:35.194729+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:36.194842+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:37.194966+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:38.195130+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:39.195280+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 516096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:40.195446+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 516096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:41.195597+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 516096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:42.195792+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 516096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:43.196017+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 516096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:44.196135+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 516096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:45.196489+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:46.196856+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:47.196974+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69697536 unmapped: 499712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:48.197149+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69697536 unmapped: 499712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:49.197405+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69697536 unmapped: 499712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:50.197522+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69705728 unmapped: 491520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:51.197723+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69705728 unmapped: 491520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:52.197888+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69705728 unmapped: 491520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:53.198018+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69705728 unmapped: 491520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:54.198139+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69705728 unmapped: 491520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:55.198394+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69713920 unmapped: 483328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:56.198512+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69713920 unmapped: 483328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:57.198762+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69713920 unmapped: 483328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:58.198954+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69713920 unmapped: 483328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:59.199150+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69713920 unmapped: 483328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:00.199275+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69713920 unmapped: 483328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:01.199394+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69713920 unmapped: 483328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:02.199586+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69713920 unmapped: 483328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:03.199779+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69713920 unmapped: 483328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:04.199960+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69713920 unmapped: 483328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:05.200110+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:06.200255+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:07.200366+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:08.200566+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:09.200716+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:10.200836+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:11.200971+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:12.201218+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:13.201342+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:14.201493+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:15.201664+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:16.201801+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:17.202077+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:18.202355+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:19.202483+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:20.202594+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:21.202733+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:22.202941+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:23.203067+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:24.203186+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:25.203423+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:26.204135+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:27.204264+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:28.204400+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:29.204526+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:30.204634+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:31.204812+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:32.204958+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:33.205098+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:34.205286+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:35.205406+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:36.205564+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:37.205693+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:38.205835+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:39.206011+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:40.206140+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:41.206252+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:42.206447+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:43.206607+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:44.206739+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:45.206882+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:46.207089+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:47.207212+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:48.207398+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:49.207539+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:50.207717+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:51.208186+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:52.208568+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:53.208953+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:54.209159+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:55.209279+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:56.209464+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:57.209618+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:58.209805+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:59.209968+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:00.210303+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:01.210452+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:02.210578+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:03.210695+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 450560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:04.210840+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 450560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:05.211003+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 450560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:06.211132+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 450560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:07.211257+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 450560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:08.211421+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 450560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:09.211538+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 450560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:10.211654+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 450560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:11.211800+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 450560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:12.211936+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 450560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:13.212064+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 450560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:14.212226+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 450560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:15.212360+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:16.212643+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:17.212826+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:18.212950+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:19.213092+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:20.213253+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:21.213397+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:22.213561+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:23.213735+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:24.213881+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:25.214040+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:26.214158+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:27.214289+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:28.214503+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:29.214622+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:30.214771+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:31.214919+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:32.215048+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:33.215240+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:34.215380+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:35.215512+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:36.215673+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:37.215840+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:38.216007+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:39.216150+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:40.216345+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:41.216498+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:42.216660+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:43.216781+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:44.216923+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:45.217067+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:46.217197+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:47.217382+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:48.217573+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:49.217735+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:50.217974+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:51.218205+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:52.218379+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:53.218497+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:54.218617+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:55.218722+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:56.218878+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:57.219044+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:58.219236+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:59.219374+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:00.219518+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:01.219752+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:02.219925+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:03.220093+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 5582 writes, 23K keys, 5582 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5582 writes, 857 syncs, 6.51 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 385024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:04.220264+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:05.220413+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:06.220536+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:07.220639+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:08.220797+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:09.220937+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:10.221068+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:11.221191+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:12.221301+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:13.221440+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:14.221607+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:15.221757+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:16.222489+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:17.222701+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:18.222946+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:19.223048+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:20.223171+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:21.223300+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:22.223392+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:23.223589+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:24.223812+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:25.224003+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:26.224153+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:27.224280+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:28.224430+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:29.224592+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:30.224712+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:31.224962+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:32.225125+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:33.225247+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:34.225397+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:35.225593+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:36.225725+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:37.225840+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:38.225977+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:39.226105+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:40.226256+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:41.226380+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:42.226490+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:43.226619+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:44.226686+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:45.226765+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:46.226949+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:47.227075+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:48.227309+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:49.227462+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:50.227575+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:51.227720+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:52.227837+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:53.227985+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:54.228109+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:55.228294+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:56.228458+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:57.228588+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:58.228745+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:59.228951+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:00.229184+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:01.229341+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:02.229454+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:03.229617+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:04.229749+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:05.229882+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:06.230062+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:07.230173+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:08.230361+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:09.230490+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:10.230591+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:11.230703+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:12.230822+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:13.230947+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:14.231089+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:15.231234+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:16.231407+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:17.231531+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:18.231666+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:19.231782+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:20.231920+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:21.232099+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:22.232235+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:23.232361+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:24.232527+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:25.232630+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:26.232752+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:27.232920+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:28.233064+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:29.233214+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 360448 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 599.843994141s of 600.182861328s, submitted: 106
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:30.233330+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69910528 unmapped: 1335296 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:31.233479+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1245184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:32.233651+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1245184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:33.233793+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1245184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:34.233934+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1245184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:35.234303+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1245184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:36.234495+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1245184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:37.234637+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1245184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:38.234779+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1245184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:39.234975+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:40.235132+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:41.235256+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:42.235393+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:43.235537+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:44.235782+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:45.235890+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:46.236037+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:47.236183+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:48.236361+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:49.236671+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:50.236843+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:51.237006+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:52.237132+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:53.237287+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:54.237840+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:55.237948+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:56.238098+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:57.238245+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:58.238392+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:59.238513+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:00.238639+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:01.238824+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:02.238997+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:03.239129+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:04.239252+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:05.239391+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:06.239550+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:07.239697+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:08.239858+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:09.239943+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:10.240064+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:11.240203+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:12.240317+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:13.240417+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:14.240516+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 1204224 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:15.240641+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 1204224 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:16.240751+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 1204224 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:17.240998+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 1204224 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:18.241917+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 1204224 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:19.242023+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70057984 unmapped: 1187840 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:20.242137+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:21.242295+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:22.242461+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:23.242631+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:24.242800+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:25.242995+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:26.243106+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:27.243243+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:28.243452+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:29.243591+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:30.243714+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:31.243842+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:32.244707+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:33.244846+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:34.244972+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:35.245095+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:36.245218+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:37.245397+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:38.245600+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:39.245718+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:40.245866+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:41.246024+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:42.246196+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:43.246368+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:44.246511+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:45.246685+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:46.246799+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:47.246958+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:48.247167+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:49.247353+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:50.247518+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:51.247705+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:52.247837+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:53.247961+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:54.248074+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:55.248226+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:56.248388+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:57.248518+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:58.249194+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:59.249309+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1138688 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:00.249427+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1138688 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:01.249610+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1138688 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:02.249728+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1138688 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:03.249855+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1138688 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:04.249999+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1138688 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:05.250125+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1138688 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:06.250260+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:07.251066+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:08.251857+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:09.252513+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:10.252821+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:11.253216+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:12.253569+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:13.254012+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:14.254451+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:15.254710+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:16.255008+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:17.255286+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:18.255533+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:19.255746+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:20.256010+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:21.256171+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:22.256401+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:23.256583+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:24.256746+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:25.256992+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:26.257185+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:27.257367+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:28.257600+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:29.257780+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:30.257953+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:31.258117+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:32.258295+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:33.258462+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:34.258586+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 1105920 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:35.258793+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 1105920 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:36.258995+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 1105920 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:37.259156+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 1105920 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:38.259330+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 1105920 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:39.260678+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:40.262862+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:41.264668+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:42.265444+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:43.266986+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:44.268003+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:45.268315+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:46.269474+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:47.270072+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:48.271026+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:49.271780+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:50.272104+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:51.272742+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:52.273239+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:53.273471+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:54.274024+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:55.274329+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:56.274605+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:57.275047+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:58.275344+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:59.275600+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:00.275856+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:01.275964+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:02.276122+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:03.276297+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:04.276604+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:05.276760+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:06.276970+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:07.277239+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:08.277527+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:09.277751+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:10.278002+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:11.278276+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:12.279353+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:13.280151+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:14.280395+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:15.281149+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:16.281629+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:17.282317+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:18.282988+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:19.283320+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:20.283754+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:21.284131+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:22.284307+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:23.284694+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:24.284994+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:25.285383+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:26.285793+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:27.286025+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:28.286200+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:29.286406+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:30.286692+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:31.286948+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:32.287185+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:33.287412+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:34.287618+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:35.287763+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:36.287945+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:37.288137+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:38.288387+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:39.288512+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:40.288768+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:41.288963+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:42.289136+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:43.289290+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:44.289427+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:45.289780+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:46.290467+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:47.291025+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:48.291316+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:49.291809+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:50.292263+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:51.292721+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:52.293184+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:53.293533+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:54.293793+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:55.294123+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:56.294337+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:57.294470+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:58.294748+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:59.295004+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:00.295250+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:01.295491+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:02.295711+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:03.295976+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:04.296115+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:05.296260+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:06.296439+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:07.296615+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:08.298432+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:09.298565+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:10.298780+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:11.298965+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:12.299135+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:13.299272+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:14.299513+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:15.299735+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:16.299936+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:17.300073+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:18.300293+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:19.300446+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:20.300618+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:21.300755+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:22.300885+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:23.301060+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:24.301183+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:25.301390+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:26.301551+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:27.301707+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:28.301997+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:29.302182+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:30.302402+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:31.302588+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:32.302750+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:33.302936+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:34.303089+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:35.303269+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:36.303504+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:37.303726+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:38.303971+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:39.304144+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:40.304286+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:41.304457+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:42.304642+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:43.304819+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:44.305005+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:45.305162+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:46.305305+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:47.305488+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:48.305666+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:49.305800+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:50.305965+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:51.306128+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:52.306327+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:53.306488+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:54.306642+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:55.306835+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:56.306973+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:57.307144+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:58.307345+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:59.307518+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:00.307712+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:01.307831+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:02.308033+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:03.308182+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:04.308313+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:05.308456+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:06.308623+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:07.308775+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:08.308985+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:09.309127+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:10.309307+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:11.309463+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:12.309625+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:13.309752+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:14.309893+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:15.310082+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:16.310283+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:17.310424+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:18.311080+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:19.311231+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:20.311438+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:21.311596+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:22.314972+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:23.319505+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:24.320327+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:25.322274+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:26.323995+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:27.324141+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:28.324806+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:29.325282+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:30.326231+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:31.326806+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:32.327550+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:33.328245+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:34.328580+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:35.328801+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:36.329473+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:37.329614+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:38.330145+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:39.330275+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70238208 unmapped: 1007616 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:40.330637+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70238208 unmapped: 1007616 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:41.330960+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70238208 unmapped: 1007616 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:42.331219+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70238208 unmapped: 1007616 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:43.331436+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70238208 unmapped: 1007616 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:44.331601+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:45.331770+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:46.331996+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:47.332140+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:48.332357+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:49.332492+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:50.332650+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:51.332783+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:52.332955+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:53.333119+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:54.333239+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:55.333387+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:56.333505+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:57.333639+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:58.333824+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:59.333988+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:00.334118+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:01.334238+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:02.334408+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:03.334558+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:04.334720+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:05.334865+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:06.334997+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:07.335130+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:08.335364+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:09.335515+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:10.335649+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:11.335811+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:12.335998+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:13.336185+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:14.336337+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:15.336465+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:16.336617+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:17.336781+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:18.336978+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:19.337112+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:20.337304+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:21.337430+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:22.337578+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:23.337713+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:24.337839+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:25.338037+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:26.338292+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:27.338613+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:28.339745+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:29.339946+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:30.340060+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:31.340317+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:32.340959+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:33.341471+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:34.342043+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:35.342526+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:36.342770+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:37.343032+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:38.343308+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:39.343524+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:40.343747+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:41.343934+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:42.344255+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:43.344497+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:44.344638+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:45.344873+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 120 handle_osd_map epochs [121,122], i have 120, src has [1,122]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 375.130676270s of 375.512084961s, submitted: 106
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab27571400
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70238208 unmapped: 1007616 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:46.345038+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 10248192 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:47.345385+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 123 ms_handle_reset con 0x55ab27571400 session 0x55ab2683e3c0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70320128 unmapped: 10240000 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:48.345693+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70320128 unmapped: 10240000 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:49.345830+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754c800
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938461 data_alloc: 218103808 data_used: 159744
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fc640000/0x0/0x4ffc00000, data 0x52031a/0x5dd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 123 ms_handle_reset con 0x55ab2754c800 session 0x55ab25fd8960
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70377472 unmapped: 10182656 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:50.346009+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 10174464 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:51.346128+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fc1d1000/0x0/0x4ffc00000, data 0x99031a/0xa4d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:52.346414+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:53.346578+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:54.346763+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942795 data_alloc: 218103808 data_used: 172032
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fc1cd000/0x0/0x4ffc00000, data 0x991eb3/0xa50000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 124 handle_osd_map epochs [125,125], i have 125, src has [1,125]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:55.346937+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:56.347093+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc1ca000/0x0/0x4ffc00000, data 0x993916/0xa53000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:57.347287+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:58.347465+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:59.347611+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945769 data_alloc: 218103808 data_used: 172032
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:00.347789+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:01.348005+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc1ca000/0x0/0x4ffc00000, data 0x993916/0xa53000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:02.348147+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:03.348340+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:04.348503+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945769 data_alloc: 218103808 data_used: 172032
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:05.348673+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:06.348807+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:07.348939+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc1ca000/0x0/0x4ffc00000, data 0x993916/0xa53000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:08.349124+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:09.349343+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945929 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:10.349538+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:11.349726+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:12.349892+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc1ca000/0x0/0x4ffc00000, data 0x993916/0xa53000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:13.350106+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc1ca000/0x0/0x4ffc00000, data 0x993916/0xa53000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:14.350280+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945929 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:15.350457+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:16.350625+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:17.350755+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:18.350972+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc1ca000/0x0/0x4ffc00000, data 0x993916/0xa53000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:19.351159+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc1ca000/0x0/0x4ffc00000, data 0x993916/0xa53000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945929 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:20.351334+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:21.351474+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc1ca000/0x0/0x4ffc00000, data 0x993916/0xa53000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 125 handle_osd_map epochs [126,126], i have 126, src has [1,126]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 36.031490326s of 36.668552399s, submitted: 38
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:22.351626+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754cc00
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70459392 unmapped: 10100736 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:23.351775+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fc1c6000/0x0/0x4ffc00000, data 0x995493/0xa56000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 126 handle_osd_map epochs [126,127], i have 126, src has [1,127]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70459392 unmapped: 10100736 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:24.351972+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 127 ms_handle_reset con 0x55ab2754cc00 session 0x55ab29446b40
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953323 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70508544 unmapped: 10051584 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:25.352147+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc1c4000/0x0/0x4ffc00000, data 0x997064/0xa59000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70508544 unmapped: 10051584 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:26.352326+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26d94800
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 128 ms_handle_reset con 0x55ab26d94800 session 0x55ab28eb4000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70533120 unmapped: 10027008 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:27.352478+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fc1c4000/0x0/0x4ffc00000, data 0x997064/0xa59000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70533120 unmapped: 10027008 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:28.352639+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab290a3400
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70533120 unmapped: 10027008 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:29.352775+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033441 data_alloc: 218103808 data_used: 176128
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab290a3000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 18243584 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:30.352915+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 128 ms_handle_reset con 0x55ab290a3000 session 0x55ab29092000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 18128896 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:31.353064+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fa1c0000/0x0/0x4ffc00000, data 0x2998c30/0x2a5e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 128 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab290a2c00
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fa1c0000/0x0/0x4ffc00000, data 0x2998c30/0x2a5e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab290a2400
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 130 ms_handle_reset con 0x55ab290a3400 session 0x55ab293d34a0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 130 ms_handle_reset con 0x55ab290a2c00 session 0x55ab290921e0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 18087936 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:32.353208+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 130 ms_handle_reset con 0x55ab290a2400 session 0x55ab29447680
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.400087357s of 10.776869774s, submitted: 38
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 131 heartbeat osd_stat(store_statfs(0x4fa1ba000/0x0/0x4ffc00000, data 0x299afbb/0x2a63000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26d94800
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754cc00
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 18079744 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:33.353341+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 18071552 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:34.353461+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 132 ms_handle_reset con 0x55ab2754cc00 session 0x55ab290925a0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754c800
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 132 ms_handle_reset con 0x55ab26d94800 session 0x55ab273dd860
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982998 data_alloc: 218103808 data_used: 188416
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26d94800
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754cc00
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 133 ms_handle_reset con 0x55ab26d94800 session 0x55ab28eb4f00
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 133 ms_handle_reset con 0x55ab2754c800 session 0x55ab29092780
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 16932864 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:35.353631+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 133 ms_handle_reset con 0x55ab2754cc00 session 0x55ab29447a40
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab290a2400
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb9b0000/0x0/0x4ffc00000, data 0x9a0320/0xa6e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 134 ms_handle_reset con 0x55ab290a2400 session 0x55ab287cc5a0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 16883712 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:36.353807+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab290a2c00
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:37.353982+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 16859136 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 135 ms_handle_reset con 0x55ab290a2c00 session 0x55ab287cc960
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26d94800
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:38.354196+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 16818176 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 136 ms_handle_reset con 0x55ab26d94800 session 0x55ab287cd2c0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc1a6000/0x0/0x4ffc00000, data 0x9a5688/0xa77000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:39.354442+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 16801792 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000265 data_alloc: 218103808 data_used: 221184
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754c800
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:40.354572+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 16793600 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 137 ms_handle_reset con 0x55ab2754c800 session 0x55ab287cc960
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754cc00
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:41.354725+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 16752640 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc1a3000/0x0/0x4ffc00000, data 0x9a8e67/0xa7b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc1a3000/0x0/0x4ffc00000, data 0x9a8e67/0xa7b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:42.354855+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 16736256 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 138 ms_handle_reset con 0x55ab2754cc00 session 0x55ab29447a40
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab27571400
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:43.355016+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 16687104 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.387613297s of 10.994990349s, submitted: 180
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 140 ms_handle_reset con 0x55ab27571400 session 0x55ab287cda40
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:44.355172+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 16646144 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010927 data_alloc: 218103808 data_used: 245760
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab287b9c00
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:45.355301+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 72351744 unmapped: 16605184 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 141 ms_handle_reset con 0x55ab287b9c00 session 0x55ab29578b40
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc199000/0x0/0x4ffc00000, data 0x9ae158/0xa84000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26d94800
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754c800
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:46.355445+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 72343552 unmapped: 16613376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 142 ms_handle_reset con 0x55ab2754c800 session 0x55ab290930e0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754cc00
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:47.355552+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 15540224 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 143 ms_handle_reset con 0x55ab26d94800 session 0x55ab29578f00
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab27571400
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 143 ms_handle_reset con 0x55ab27571400 session 0x55ab295792c0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 143 ms_handle_reset con 0x55ab2754cc00 session 0x55ab29093860
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:48.355763+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 15491072 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc18e000/0x0/0x4ffc00000, data 0x9b4b8b/0xa8d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:49.355976+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 15491072 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1023612 data_alloc: 218103808 data_used: 258048
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc18e000/0x0/0x4ffc00000, data 0x9b4b8b/0xa8d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc18e000/0x0/0x4ffc00000, data 0x9b4b8b/0xa8d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:50.356225+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 15441920 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:51.356430+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 15441920 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:52.356659+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 15441920 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc18e000/0x0/0x4ffc00000, data 0x9b4b8b/0xa8d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:53.356839+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 15433728 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:54.356985+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 15433728 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025050 data_alloc: 218103808 data_used: 258048
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab293cac00
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 145 ms_handle_reset con 0x55ab293cac00 session 0x55ab295794a0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:55.357096+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 15433728 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab293cac00
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 145 ms_handle_reset con 0x55ab293cac00 session 0x55ab29579860
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:56.357246+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 15433728 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26d94800
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 145 ms_handle_reset con 0x55ab26d94800 session 0x55ab29579a40
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754c800
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fc18d000/0x0/0x4ffc00000, data 0x9b6686/0xa90000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:57.357370+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 15433728 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:58.357578+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 15433728 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 145 handle_osd_map epochs [147,147], i have 145, src has [1,147]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 145 handle_osd_map epochs [146,147], i have 145, src has [1,147]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.108797073s of 14.427786827s, submitted: 106
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 147 ms_handle_reset con 0x55ab2754c800 session 0x55ab29579e00
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754cc00
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:59.357725+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 14057472 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1039723 data_alloc: 218103808 data_used: 262144
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:00.357968+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 14057472 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:01.358143+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 14057472 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab27571400
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fc161000/0x0/0x4ffc00000, data 0x9dde07/0xabc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:02.358333+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 14049280 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab28b6a400
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 148 ms_handle_reset con 0x55ab28b6a400 session 0x55ab29447e00
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 148 ms_handle_reset con 0x55ab27571400 session 0x55ab291141e0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26d94800
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:03.358479+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 12992512 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 149 ms_handle_reset con 0x55ab26d94800 session 0x55ab291145a0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754c800
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 149 ms_handle_reset con 0x55ab2754c800 session 0x55ab29114960
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab27571400
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:04.359009+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 12984320 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 150 ms_handle_reset con 0x55ab27571400 session 0x55ab29114b40
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049693 data_alloc: 218103808 data_used: 270336
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab28b6a400
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:05.359157+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 12926976 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:06.359615+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 151 ms_handle_reset con 0x55ab28b6a400 session 0x55ab291150e0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 12918784 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab293cac00
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:07.359742+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 12910592 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 152 ms_handle_reset con 0x55ab293cac00 session 0x55ab29115860
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc153000/0x0/0x4ffc00000, data 0x9e4d23/0xac9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:08.359986+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 12959744 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:09.360159+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 12959744 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058405 data_alloc: 218103808 data_used: 278528
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc14f000/0x0/0x4ffc00000, data 0x9e68d8/0xacc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:10.360326+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 12926976 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc14f000/0x0/0x4ffc00000, data 0x9e68d8/0xacc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:11.360452+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 12926976 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:12.360589+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 12926976 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26d94800
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 152 ms_handle_reset con 0x55ab26d94800 session 0x55ab29115c20
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754c800
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 152 ms_handle_reset con 0x55ab2754c800 session 0x55ab29115e00
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab27571400
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 152 ms_handle_reset con 0x55ab27571400 session 0x55ab29114000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab28b6a400
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 152 ms_handle_reset con 0x55ab28b6a400 session 0x55ab291141e0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26a4f800
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.032873154s of 14.419190407s, submitted: 76
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 152 ms_handle_reset con 0x55ab26a4f800 session 0x55ab29447e00
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26d94800
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 153 ms_handle_reset con 0x55ab26d94800 session 0x55ab295794a0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754c800
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 153 ms_handle_reset con 0x55ab2754c800 session 0x55ab29579a40
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab27571400
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 153 ms_handle_reset con 0x55ab27571400 session 0x55ab29579e00
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab28b6a400
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 153 ms_handle_reset con 0x55ab28b6a400 session 0x55ab29093860
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:13.360712+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 11862016 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26a4e000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 153 ms_handle_reset con 0x55ab26a4e000 session 0x55ab287cc3c0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26a4e000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 153 ms_handle_reset con 0x55ab26a4e000 session 0x55ab28378000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:14.360860+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 11862016 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26d94800
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 153 ms_handle_reset con 0x55ab26d94800 session 0x55ab2783c000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754c800
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 153 ms_handle_reset con 0x55ab2754c800 session 0x55ab29447e00
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fc14d000/0x0/0x4ffc00000, data 0x9e8383/0xad0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064614 data_alloc: 218103808 data_used: 286720
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab27571400
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab28b6a400
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:15.360973+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 12910592 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:16.361111+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 12910592 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26a4f400
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 153 ms_handle_reset con 0x55ab26a4f400 session 0x55ab290930e0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab290a3c00
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:17.361247+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 12910592 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 153 ms_handle_reset con 0x55ab290a3c00 session 0x55ab287cda40
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab290a3c00
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26a4e000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 154 ms_handle_reset con 0x55ab26a4e000 session 0x55ab29115c20
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 154 ms_handle_reset con 0x55ab290a3c00 session 0x55ab29447a40
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26a4f400
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 154 ms_handle_reset con 0x55ab26a4f400 session 0x55ab287cc3c0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26d94800
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:18.361457+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 12902400 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 155 ms_handle_reset con 0x55ab26d94800 session 0x55ab287cda40
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754c800
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:19.361598+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fc142000/0x0/0x4ffc00000, data 0x9ebf46/0xada000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 12869632 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076395 data_alloc: 218103808 data_used: 311296
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 156 ms_handle_reset con 0x55ab2754c800 session 0x55ab29093860
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:20.361748+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 12861440 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:21.361976+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 12861440 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754c800
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:22.362152+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 12861440 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 156 ms_handle_reset con 0x55ab2754c800 session 0x55ab29447e00
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26a4e000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.323160172s of 10.468074799s, submitted: 55
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:23.362270+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 12861440 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 156 handle_osd_map epochs [156,157], i have 156, src has [1,157]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:24.362376+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 157 ms_handle_reset con 0x55ab26a4e000 session 0x55ab28378000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 12812288 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079914 data_alloc: 218103808 data_used: 327680
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:25.362510+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fc13f000/0x0/0x4ffc00000, data 0x9ef27a/0xadd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 12812288 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 157 ms_handle_reset con 0x55ab27571400 session 0x55ab291150e0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 157 ms_handle_reset con 0x55ab28b6a400 session 0x55ab29106000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fc13f000/0x0/0x4ffc00000, data 0x9ef27a/0xadd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26a4f400
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:26.362666+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 12779520 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 158 ms_handle_reset con 0x55ab26a4f400 session 0x55ab29622000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:27.362803+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 12738560 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:28.362994+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 12738560 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc13f000/0x0/0x4ffc00000, data 0x9f0e24/0xadd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:29.363173+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 12738560 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084363 data_alloc: 218103808 data_used: 327680
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:30.363355+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 12738560 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 159 ms_handle_reset con 0x55ab2754cc00 session 0x55ab295792c0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26a4e000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 159 ms_handle_reset con 0x55ab26a4e000 session 0x55ab2960d2c0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:31.363516+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:32.363725+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754c800
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 159 ms_handle_reset con 0x55ab2754c800 session 0x55ab2960dc20
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab27571400
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 160 ms_handle_reset con 0x55ab27571400 session 0x55ab2960de00
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:33.363944+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc161000/0x0/0x4ffc00000, data 0x9d0437/0xabc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:34.364200+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083630 data_alloc: 218103808 data_used: 323584
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:35.364405+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:36.364591+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc161000/0x0/0x4ffc00000, data 0x9d0437/0xabc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:37.364754+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:38.365003+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:39.365137+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083630 data_alloc: 218103808 data_used: 323584
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:40.365247+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc161000/0x0/0x4ffc00000, data 0x9d0437/0xabc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 160 handle_osd_map epochs [161,161], i have 161, src has [1,161]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.256870270s of 17.725013733s, submitted: 102
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:41.365359+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:42.365465+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:43.365599+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:44.365731+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:45.365879+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:46.366126+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:47.366288+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:48.366491+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:49.366623+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:50.366831+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:51.367031+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:52.367204+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:53.367363+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:54.367501+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:55.367652+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:56.367818+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:57.367957+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:58.368181+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:59.368359+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:00.368572+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:01.368760+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:02.368932+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:03.369167+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 6867 writes, 27K keys, 6867 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6867 writes, 1384 syncs, 4.96 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1285 writes, 3543 keys, 1285 commit groups, 1.0 writes per commit group, ingest: 1.95 MB, 0.00 MB/s
                                           Interval WAL: 1285 writes, 527 syncs, 2.44 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:04.369350+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:05.369528+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:06.369787+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:07.369942+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:08.370157+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:09.370304+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 12697600 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:10.370473+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 12697600 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:11.370652+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 12697600 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:12.370817+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 12697600 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:13.371055+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: mgrc ms_handle_reset ms_handle_reset con 0x55ab27959800
Nov 24 18:51:58 compute-0 ceph-osd[88544]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/536471675
Nov 24 18:51:58 compute-0 ceph-osd[88544]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/536471675,v1:192.168.122.100:6801/536471675]
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: get_auth_request con 0x55ab26a4f800 auth_method 0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: mgrc handle_mgr_configure stats_period=5
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 12550144 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:14.371252+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 12550144 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:15.371420+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 12550144 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:16.371587+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 12550144 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:17.371724+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 12550144 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:18.371953+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 12550144 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:19.372090+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 12550144 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:20.372240+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 ms_handle_reset con 0x55ab26292400 session 0x55ab25fd94a0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754c400
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 12550144 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:21.372404+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 12550144 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:22.372601+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 12541952 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:23.372747+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 12541952 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:24.372967+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 12541952 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:25.373150+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 12541952 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:26.373363+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 12541952 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:27.373531+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 12541952 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:28.373726+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 12541952 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:29.373859+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 12541952 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:30.373993+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 12541952 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:31.374147+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 12541952 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:32.374314+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 12541952 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:33.374487+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:34.374670+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:35.374866+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:36.375195+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:37.375389+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:38.375560+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:39.375752+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:40.375938+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:41.376106+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:42.376258+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:43.376402+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:44.376586+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:45.376710+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:46.376868+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:47.376967+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:48.377119+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:49.377318+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:50.377438+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:51.377595+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:52.377718+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:53.377859+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:54.378006+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:55.378135+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:56.378288+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:57.378436+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:58.378631+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:59.378795+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:00.378978+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:01.379159+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:02.379348+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:03.379525+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:04.379685+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:05.379868+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:06.380010+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:07.380128+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:08.380289+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:09.380441+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:10.380618+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:11.380805+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:12.380988+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:13.381108+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:14.382712+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:15.382845+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:16.382956+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:17.383066+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:18.383208+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:19.383337+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:20.383456+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:21.383584+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:22.383696+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 ms_handle_reset con 0x55ab26d95c00 session 0x55ab26b38000
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26292400
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:23.383854+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:24.383946+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:51:58 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:51:58 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:25.384065+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 12402688 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: do_command 'config diff' '{prefix=config diff}'
Nov 24 18:51:58 compute-0 ceph-osd[88544]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 24 18:51:58 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:51:58 compute-0 ceph-osd[88544]: do_command 'config show' '{prefix=config show}'
Nov 24 18:51:58 compute-0 ceph-osd[88544]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 24 18:51:58 compute-0 ceph-osd[88544]: do_command 'counter dump' '{prefix=counter dump}'
Nov 24 18:51:58 compute-0 ceph-osd[88544]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:26.384178+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: do_command 'counter schema' '{prefix=counter schema}'
Nov 24 18:51:58 compute-0 ceph-osd[88544]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 11878400 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:51:58 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:27.384293+0000)
Nov 24 18:51:58 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 11796480 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:51:58 compute-0 ceph-osd[88544]: do_command 'log dump' '{prefix=log dump}'
Nov 24 18:51:58 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 18:51:58 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 18:51:58 compute-0 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 18:51:58 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Nov 24 18:51:58 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3065529142' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 24 18:51:59 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14895 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:51:59 compute-0 ceph-mon[74927]: pgmap v1123: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:59 compute-0 ceph-mon[74927]: from='client.14883 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:51:59 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1022376122' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 24 18:51:59 compute-0 ceph-mon[74927]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 18:51:59 compute-0 ceph-mon[74927]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 18:51:59 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3065529142' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 24 18:51:59 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1124: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:51:59 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Nov 24 18:51:59 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1431330591' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 24 18:52:00 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Nov 24 18:52:00 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2922274783' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 24 18:52:00 compute-0 ceph-mon[74927]: from='client.14895 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:52:00 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1431330591' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 24 18:52:00 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2922274783' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 24 18:52:00 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Nov 24 18:52:00 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3529921539' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 24 18:52:01 compute-0 systemd[1]: Starting Hostname Service...
Nov 24 18:52:01 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Nov 24 18:52:01 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1272243041' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 24 18:52:01 compute-0 systemd[1]: Started Hostname Service.
Nov 24 18:52:01 compute-0 ceph-mon[74927]: pgmap v1124: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:01 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3529921539' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 24 18:52:01 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1272243041' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 24 18:52:01 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14905 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:52:01 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1125: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Nov 24 18:52:02 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3523787368' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 24 18:52:02 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3523787368' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 24 18:52:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Nov 24 18:52:02 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2134113622' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 24 18:52:02 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14911 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:52:02 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:52:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Nov 24 18:52:03 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3285648746' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 24 18:52:03 compute-0 ceph-mon[74927]: from='client.14905 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:52:03 compute-0 ceph-mon[74927]: pgmap v1125: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:03 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2134113622' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 24 18:52:03 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3285648746' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 24 18:52:03 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14915 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:52:03 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1126: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Nov 24 18:52:03 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14917 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:52:04 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Nov 24 18:52:04 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1220246040' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 24 18:52:04 compute-0 ceph-mon[74927]: from='client.14911 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:52:04 compute-0 ceph-mon[74927]: from='client.14915 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:52:04 compute-0 ceph-mon[74927]: pgmap v1126: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Nov 24 18:52:04 compute-0 ceph-mon[74927]: from='client.14917 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:52:04 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1220246040' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 24 18:52:04 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Nov 24 18:52:04 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1347811353' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 24 18:52:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:52:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:52:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:52:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:52:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:52:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:52:04 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14923 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:52:05 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14925 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:52:05 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:05 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:52:05 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:05 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 18:52:05 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:05 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 18:52:05 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:05 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:52:05 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:05 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 18:52:05 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:05 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:52:05 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:05 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:52:05 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:05 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:52:05 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:05 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:52:05 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:05 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:52:05 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:05 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:52:05 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Nov 24 18:52:05 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1777388610' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 24 18:52:05 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1347811353' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 24 18:52:05 compute-0 ceph-mon[74927]: from='client.14923 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:52:05 compute-0 ceph-mon[74927]: from='client.14925 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:52:05 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1777388610' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 24 18:52:05 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1127: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:52:05 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Nov 24 18:52:05 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/38018722' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 24 18:52:06 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14931 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:52:06 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14933 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:52:06 compute-0 ceph-mon[74927]: pgmap v1127: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:52:06 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/38018722' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 24 18:52:06 compute-0 ceph-mon[74927]: from='client.14931 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:52:06 compute-0 ceph-mon[74927]: from='client.14933 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:52:06 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 24 18:52:06 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2332352452' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 18:52:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Nov 24 18:52:07 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/669115039' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 24 18:52:07 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2332352452' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 18:52:07 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/669115039' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 24 18:52:07 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1128: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:52:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Nov 24 18:52:07 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1729651015' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 24 18:52:07 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:52:08 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14941 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:52:08 compute-0 ovs-appctl[287479]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 24 18:52:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 24 18:52:08 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3692067416' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 18:52:08 compute-0 ovs-appctl[287488]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 24 18:52:08 compute-0 ovs-appctl[287495]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 24 18:52:08 compute-0 ceph-mon[74927]: pgmap v1128: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:52:08 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1729651015' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 24 18:52:08 compute-0 ceph-mon[74927]: from='client.14941 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:52:08 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3692067416' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 18:52:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Nov 24 18:52:08 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1528214084' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 24 18:52:09 compute-0 podman[287724]: 2025-11-24 18:52:09.124077388 +0000 UTC m=+0.072353079 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 18:52:09 compute-0 podman[287713]: 2025-11-24 18:52:09.133797507 +0000 UTC m=+0.082347784 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 18:52:09 compute-0 podman[287726]: 2025-11-24 18:52:09.159574771 +0000 UTC m=+0.107261437 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 24 18:52:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Nov 24 18:52:09 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1438931941' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 24 18:52:09 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1528214084' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 24 18:52:09 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1438931941' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 24 18:52:09 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1129: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:52:09 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Nov 24 18:52:09 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/841362417' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 24 18:52:10 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14951 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:52:10 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Nov 24 18:52:10 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2635190399' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 24 18:52:10 compute-0 ceph-mon[74927]: pgmap v1129: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:52:10 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/841362417' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 24 18:52:10 compute-0 ceph-mon[74927]: from='client.14951 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:52:10 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2635190399' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 24 18:52:10 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Nov 24 18:52:10 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/467331363' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 24 18:52:11 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14957 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:52:11 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/467331363' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 24 18:52:11 compute-0 ceph-mon[74927]: from='client.14957 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:52:11 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Nov 24 18:52:11 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2731539918' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 24 18:52:11 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1130: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:52:12 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14961 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:52:12 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14963 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:52:12 compute-0 nova_compute[270693]: 2025-11-24 18:52:12.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:52:12 compute-0 nova_compute[270693]: 2025-11-24 18:52:12.529 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 18:52:12 compute-0 nova_compute[270693]: 2025-11-24 18:52:12.529 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 18:52:12 compute-0 nova_compute[270693]: 2025-11-24 18:52:12.558 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 18:52:12 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2731539918' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 24 18:52:12 compute-0 ceph-mon[74927]: pgmap v1130: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:52:12 compute-0 ceph-mon[74927]: from='client.14961 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:52:12 compute-0 ceph-mon[74927]: from='client.14963 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:52:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Nov 24 18:52:12 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3709281775' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 24 18:52:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:52:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Nov 24 18:52:13 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/341538310' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 24 18:52:13 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14969 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:52:13 compute-0 nova_compute[270693]: 2025-11-24 18:52:13.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:52:13 compute-0 nova_compute[270693]: 2025-11-24 18:52:13.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:52:13 compute-0 nova_compute[270693]: 2025-11-24 18:52:13.559 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:52:13 compute-0 nova_compute[270693]: 2025-11-24 18:52:13.560 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:52:13 compute-0 nova_compute[270693]: 2025-11-24 18:52:13.560 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:52:13 compute-0 nova_compute[270693]: 2025-11-24 18:52:13.561 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 18:52:13 compute-0 nova_compute[270693]: 2025-11-24 18:52:13.561 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:52:13 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3709281775' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 24 18:52:13 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/341538310' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 24 18:52:13 compute-0 ceph-mon[74927]: from='client.14969 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:52:13 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1131: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:52:13 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14971 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:52:13 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:13 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:52:13 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:13 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 18:52:13 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:13 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 18:52:13 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:13 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:52:13 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:13 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 18:52:13 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:13 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:52:13 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:13 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:52:13 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:13 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:52:13 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:13 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:52:13 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:13 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:52:13 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:13 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:52:14 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:52:14 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/720096219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:52:14 compute-0 nova_compute[270693]: 2025-11-24 18:52:14.055 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:52:14 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 24 18:52:14 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4235559438' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 18:52:14 compute-0 nova_compute[270693]: 2025-11-24 18:52:14.232 270697 WARNING nova.virt.libvirt.driver [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 18:52:14 compute-0 nova_compute[270693]: 2025-11-24 18:52:14.234 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4915MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 18:52:14 compute-0 nova_compute[270693]: 2025-11-24 18:52:14.234 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:52:14 compute-0 nova_compute[270693]: 2025-11-24 18:52:14.235 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:52:14 compute-0 nova_compute[270693]: 2025-11-24 18:52:14.323 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 18:52:14 compute-0 nova_compute[270693]: 2025-11-24 18:52:14.323 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 18:52:14 compute-0 nova_compute[270693]: 2025-11-24 18:52:14.340 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:52:14 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Nov 24 18:52:14 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1156185045' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 24 18:52:14 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:52:14 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/90068418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:52:14 compute-0 ceph-mon[74927]: pgmap v1131: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 18:52:14 compute-0 ceph-mon[74927]: from='client.14971 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:52:14 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/720096219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:52:14 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/4235559438' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 18:52:14 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1156185045' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 24 18:52:14 compute-0 nova_compute[270693]: 2025-11-24 18:52:14.740 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:52:14 compute-0 nova_compute[270693]: 2025-11-24 18:52:14.745 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 18:52:14 compute-0 nova_compute[270693]: 2025-11-24 18:52:14.770 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 18:52:14 compute-0 nova_compute[270693]: 2025-11-24 18:52:14.771 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 18:52:14 compute-0 nova_compute[270693]: 2025-11-24 18:52:14.772 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.537s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:52:14 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14981 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:52:15 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14983 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:52:15 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 24 18:52:15 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1466566774' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 18:52:15 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1132: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Nov 24 18:52:15 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/90068418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:52:15 compute-0 ceph-mon[74927]: from='client.14981 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:52:15 compute-0 ceph-mon[74927]: from='client.14983 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:52:15 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1466566774' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 18:52:16 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Nov 24 18:52:16 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3492418829' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 24 18:52:16 compute-0 ceph-mon[74927]: pgmap v1132: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Nov 24 18:52:16 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3492418829' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 24 18:52:16 compute-0 nova_compute[270693]: 2025-11-24 18:52:16.771 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:52:16 compute-0 nova_compute[270693]: 2025-11-24 18:52:16.772 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:52:16 compute-0 nova_compute[270693]: 2025-11-24 18:52:16.772 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:52:16 compute-0 nova_compute[270693]: 2025-11-24 18:52:16.772 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:52:16 compute-0 nova_compute[270693]: 2025-11-24 18:52:16.772 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:52:16 compute-0 nova_compute[270693]: 2025-11-24 18:52:16.772 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 18:52:17 compute-0 virtqemud[270425]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 24 18:52:17 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1133: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:52:18 compute-0 nova_compute[270693]: 2025-11-24 18:52:18.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:52:18 compute-0 ceph-mon[74927]: pgmap v1133: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 18:52:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2792081622' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:52:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 18:52:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2792081622' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:52:19 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1134: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:19 compute-0 systemd[1]: Starting Time & Date Service...
Nov 24 18:52:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/2792081622' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:52:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/2792081622' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:52:19 compute-0 systemd[1]: Started Time & Date Service.
Nov 24 18:52:20 compute-0 ceph-mon[74927]: pgmap v1134: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:21 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1135: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:52:22.746 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:52:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:52:22.747 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:52:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:52:22.747 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:52:22 compute-0 ceph-mon[74927]: pgmap v1135: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:22 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:52:23 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1136: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:24 compute-0 ceph-mon[74927]: pgmap v1136: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:25 compute-0 sudo[289617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:52:25 compute-0 sudo[289617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:52:25 compute-0 sudo[289617]: pam_unix(sudo:session): session closed for user root
Nov 24 18:52:25 compute-0 sudo[289642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:52:25 compute-0 sudo[289642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:52:25 compute-0 sudo[289642]: pam_unix(sudo:session): session closed for user root
Nov 24 18:52:25 compute-0 sudo[289667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:52:25 compute-0 sudo[289667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:52:25 compute-0 sudo[289667]: pam_unix(sudo:session): session closed for user root
Nov 24 18:52:25 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1137: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:25 compute-0 sudo[289692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 24 18:52:25 compute-0 sudo[289692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:52:26 compute-0 podman[289788]: 2025-11-24 18:52:26.177483341 +0000 UTC m=+0.068460173 container exec 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:52:26 compute-0 podman[289788]: 2025-11-24 18:52:26.270191069 +0000 UTC m=+0.161167881 container exec_died 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 18:52:26 compute-0 ceph-mon[74927]: pgmap v1137: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:26 compute-0 sudo[289692]: pam_unix(sudo:session): session closed for user root
Nov 24 18:52:26 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:52:26 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:52:26 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:52:26 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:52:26 compute-0 sudo[289948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:52:26 compute-0 sudo[289948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:52:26 compute-0 sudo[289948]: pam_unix(sudo:session): session closed for user root
Nov 24 18:52:26 compute-0 sudo[289973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:52:26 compute-0 sudo[289973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:52:26 compute-0 sudo[289973]: pam_unix(sudo:session): session closed for user root
Nov 24 18:52:26 compute-0 sudo[289998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:52:26 compute-0 sudo[289998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:52:26 compute-0 sudo[289998]: pam_unix(sudo:session): session closed for user root
Nov 24 18:52:27 compute-0 sudo[290023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:52:27 compute-0 sudo[290023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:52:27 compute-0 sudo[290023]: pam_unix(sudo:session): session closed for user root
Nov 24 18:52:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:52:27 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:52:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:52:27 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:52:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:52:27 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:52:27 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 52c2b80e-d4d9-4da7-9182-cf150f3d7f2f does not exist
Nov 24 18:52:27 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 56ed3fb8-b56e-42d3-b4f1-04fa28ac4e5d does not exist
Nov 24 18:52:27 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 74d4e4fb-b1ce-4e88-866b-e20537fffb01 does not exist
Nov 24 18:52:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:52:27 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:52:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:52:27 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:52:27 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:52:27 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:52:27 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1138: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:27 compute-0 sudo[290078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:52:27 compute-0 sudo[290078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:52:27 compute-0 sudo[290078]: pam_unix(sudo:session): session closed for user root
Nov 24 18:52:27 compute-0 sudo[290103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:52:27 compute-0 sudo[290103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:52:27 compute-0 sudo[290103]: pam_unix(sudo:session): session closed for user root
Nov 24 18:52:27 compute-0 sudo[290128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:52:27 compute-0 sudo[290128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:52:27 compute-0 sudo[290128]: pam_unix(sudo:session): session closed for user root
Nov 24 18:52:27 compute-0 sudo[290153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:52:27 compute-0 sudo[290153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:52:28 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:52:28 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:52:28 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:52:28 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:52:28 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:52:28 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:52:28 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:52:28 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:52:28 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:52:28 compute-0 podman[290220]: 2025-11-24 18:52:28.280944427 +0000 UTC m=+0.057202576 container create 908c2748b5fe21268da8943364c651f51cd7721be93c36377bde22a030b4bac2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 18:52:28 compute-0 systemd[1]: Started libpod-conmon-908c2748b5fe21268da8943364c651f51cd7721be93c36377bde22a030b4bac2.scope.
Nov 24 18:52:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:52:28 compute-0 podman[290220]: 2025-11-24 18:52:28.25989387 +0000 UTC m=+0.036152049 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:52:28 compute-0 podman[290220]: 2025-11-24 18:52:28.368371165 +0000 UTC m=+0.144629324 container init 908c2748b5fe21268da8943364c651f51cd7721be93c36377bde22a030b4bac2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_jones, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:52:28 compute-0 podman[290220]: 2025-11-24 18:52:28.376937056 +0000 UTC m=+0.153195195 container start 908c2748b5fe21268da8943364c651f51cd7721be93c36377bde22a030b4bac2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_jones, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 24 18:52:28 compute-0 systemd[1]: libpod-908c2748b5fe21268da8943364c651f51cd7721be93c36377bde22a030b4bac2.scope: Deactivated successfully.
Nov 24 18:52:28 compute-0 ecstatic_jones[290236]: 167 167
Nov 24 18:52:28 compute-0 podman[290220]: 2025-11-24 18:52:28.382661977 +0000 UTC m=+0.158920166 container attach 908c2748b5fe21268da8943364c651f51cd7721be93c36377bde22a030b4bac2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_jones, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 18:52:28 compute-0 conmon[290236]: conmon 908c2748b5fe21268da8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-908c2748b5fe21268da8943364c651f51cd7721be93c36377bde22a030b4bac2.scope/container/memory.events
Nov 24 18:52:28 compute-0 podman[290220]: 2025-11-24 18:52:28.384738818 +0000 UTC m=+0.160996967 container died 908c2748b5fe21268da8943364c651f51cd7721be93c36377bde22a030b4bac2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_jones, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 18:52:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a89d0ec2ef121f5c9689419cb5cfce45f8365d030871b7a47ddd121aa0c1e5d6-merged.mount: Deactivated successfully.
Nov 24 18:52:28 compute-0 podman[290220]: 2025-11-24 18:52:28.431301972 +0000 UTC m=+0.207560121 container remove 908c2748b5fe21268da8943364c651f51cd7721be93c36377bde22a030b4bac2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:52:28 compute-0 systemd[1]: libpod-conmon-908c2748b5fe21268da8943364c651f51cd7721be93c36377bde22a030b4bac2.scope: Deactivated successfully.
Nov 24 18:52:28 compute-0 podman[290260]: 2025-11-24 18:52:28.6542346 +0000 UTC m=+0.048036932 container create 1d98ecebb1c47fe85e91e2147e223db8edf7983a6add9454d642c51e381a30c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 18:52:28 compute-0 systemd[1]: Started libpod-conmon-1d98ecebb1c47fe85e91e2147e223db8edf7983a6add9454d642c51e381a30c1.scope.
Nov 24 18:52:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:52:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2530a4dd5bb50a1888945f286e7d82624f850602ac598cd27240fe6f6ef98b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:52:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2530a4dd5bb50a1888945f286e7d82624f850602ac598cd27240fe6f6ef98b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:52:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2530a4dd5bb50a1888945f286e7d82624f850602ac598cd27240fe6f6ef98b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:52:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2530a4dd5bb50a1888945f286e7d82624f850602ac598cd27240fe6f6ef98b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:52:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2530a4dd5bb50a1888945f286e7d82624f850602ac598cd27240fe6f6ef98b1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:52:28 compute-0 podman[290260]: 2025-11-24 18:52:28.639547049 +0000 UTC m=+0.033349401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:52:28 compute-0 podman[290260]: 2025-11-24 18:52:28.750848804 +0000 UTC m=+0.144651166 container init 1d98ecebb1c47fe85e91e2147e223db8edf7983a6add9454d642c51e381a30c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_jepsen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:52:28 compute-0 podman[290260]: 2025-11-24 18:52:28.757001725 +0000 UTC m=+0.150804067 container start 1d98ecebb1c47fe85e91e2147e223db8edf7983a6add9454d642c51e381a30c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 24 18:52:28 compute-0 podman[290260]: 2025-11-24 18:52:28.760712726 +0000 UTC m=+0.154515088 container attach 1d98ecebb1c47fe85e91e2147e223db8edf7983a6add9454d642c51e381a30c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_jepsen, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 18:52:29 compute-0 ceph-mon[74927]: pgmap v1138: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:29 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1139: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:29 compute-0 mystifying_jepsen[290277]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:52:29 compute-0 mystifying_jepsen[290277]: --> relative data size: 1.0
Nov 24 18:52:29 compute-0 mystifying_jepsen[290277]: --> All data devices are unavailable
Nov 24 18:52:29 compute-0 systemd[1]: libpod-1d98ecebb1c47fe85e91e2147e223db8edf7983a6add9454d642c51e381a30c1.scope: Deactivated successfully.
Nov 24 18:52:29 compute-0 podman[290260]: 2025-11-24 18:52:29.741196897 +0000 UTC m=+1.134999259 container died 1d98ecebb1c47fe85e91e2147e223db8edf7983a6add9454d642c51e381a30c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_jepsen, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 18:52:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2530a4dd5bb50a1888945f286e7d82624f850602ac598cd27240fe6f6ef98b1-merged.mount: Deactivated successfully.
Nov 24 18:52:29 compute-0 podman[290260]: 2025-11-24 18:52:29.827652552 +0000 UTC m=+1.221454894 container remove 1d98ecebb1c47fe85e91e2147e223db8edf7983a6add9454d642c51e381a30c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_jepsen, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:52:29 compute-0 systemd[1]: libpod-conmon-1d98ecebb1c47fe85e91e2147e223db8edf7983a6add9454d642c51e381a30c1.scope: Deactivated successfully.
Nov 24 18:52:29 compute-0 sudo[290153]: pam_unix(sudo:session): session closed for user root
Nov 24 18:52:29 compute-0 sudo[290320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:52:29 compute-0 sudo[290320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:52:29 compute-0 sudo[290320]: pam_unix(sudo:session): session closed for user root
Nov 24 18:52:29 compute-0 sudo[290345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:52:29 compute-0 sudo[290345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:52:30 compute-0 sudo[290345]: pam_unix(sudo:session): session closed for user root
Nov 24 18:52:30 compute-0 sudo[290370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:52:30 compute-0 sudo[290370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:52:30 compute-0 sudo[290370]: pam_unix(sudo:session): session closed for user root
Nov 24 18:52:30 compute-0 sudo[290395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:52:30 compute-0 sudo[290395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:52:30 compute-0 podman[290460]: 2025-11-24 18:52:30.460976834 +0000 UTC m=+0.064198369 container create 4222671948e1557fd4815823cc9543197ddab469cbbcab20a11a2b2b792f9834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:52:30 compute-0 systemd[1]: Started libpod-conmon-4222671948e1557fd4815823cc9543197ddab469cbbcab20a11a2b2b792f9834.scope.
Nov 24 18:52:30 compute-0 podman[290460]: 2025-11-24 18:52:30.426886696 +0000 UTC m=+0.030108301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:52:30 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:52:30 compute-0 podman[290460]: 2025-11-24 18:52:30.540448127 +0000 UTC m=+0.143669652 container init 4222671948e1557fd4815823cc9543197ddab469cbbcab20a11a2b2b792f9834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:52:30 compute-0 podman[290460]: 2025-11-24 18:52:30.550680758 +0000 UTC m=+0.153902263 container start 4222671948e1557fd4815823cc9543197ddab469cbbcab20a11a2b2b792f9834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_elbakyan, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:52:30 compute-0 podman[290460]: 2025-11-24 18:52:30.554090012 +0000 UTC m=+0.157311547 container attach 4222671948e1557fd4815823cc9543197ddab469cbbcab20a11a2b2b792f9834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:52:30 compute-0 bold_elbakyan[290476]: 167 167
Nov 24 18:52:30 compute-0 systemd[1]: libpod-4222671948e1557fd4815823cc9543197ddab469cbbcab20a11a2b2b792f9834.scope: Deactivated successfully.
Nov 24 18:52:30 compute-0 podman[290460]: 2025-11-24 18:52:30.559253999 +0000 UTC m=+0.162475544 container died 4222671948e1557fd4815823cc9543197ddab469cbbcab20a11a2b2b792f9834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:52:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-984afd7e706b93229506e180e1089f8e86aab19193a6cc25c41f6ab75f223e9a-merged.mount: Deactivated successfully.
Nov 24 18:52:30 compute-0 podman[290460]: 2025-11-24 18:52:30.607122645 +0000 UTC m=+0.210344150 container remove 4222671948e1557fd4815823cc9543197ddab469cbbcab20a11a2b2b792f9834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:52:30 compute-0 systemd[1]: libpod-conmon-4222671948e1557fd4815823cc9543197ddab469cbbcab20a11a2b2b792f9834.scope: Deactivated successfully.
Nov 24 18:52:30 compute-0 podman[290499]: 2025-11-24 18:52:30.748184091 +0000 UTC m=+0.036793195 container create 0acdbf09e2aed4e651d164ed88ae7645f130b4000988ca28b4f6f8894eb713aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:52:30 compute-0 systemd[1]: Started libpod-conmon-0acdbf09e2aed4e651d164ed88ae7645f130b4000988ca28b4f6f8894eb713aa.scope.
Nov 24 18:52:30 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:52:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b7ce805eab4b94f60186a7798fa9087a1192a1a3a06629fa701f597695e6a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:52:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b7ce805eab4b94f60186a7798fa9087a1192a1a3a06629fa701f597695e6a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:52:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b7ce805eab4b94f60186a7798fa9087a1192a1a3a06629fa701f597695e6a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:52:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b7ce805eab4b94f60186a7798fa9087a1192a1a3a06629fa701f597695e6a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:52:30 compute-0 podman[290499]: 2025-11-24 18:52:30.729916722 +0000 UTC m=+0.018525836 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:52:30 compute-0 podman[290499]: 2025-11-24 18:52:30.829274824 +0000 UTC m=+0.117883958 container init 0acdbf09e2aed4e651d164ed88ae7645f130b4000988ca28b4f6f8894eb713aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hawking, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:52:30 compute-0 podman[290499]: 2025-11-24 18:52:30.837589758 +0000 UTC m=+0.126198862 container start 0acdbf09e2aed4e651d164ed88ae7645f130b4000988ca28b4f6f8894eb713aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 18:52:30 compute-0 podman[290499]: 2025-11-24 18:52:30.841385521 +0000 UTC m=+0.129994625 container attach 0acdbf09e2aed4e651d164ed88ae7645f130b4000988ca28b4f6f8894eb713aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hawking, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:52:31 compute-0 ceph-mon[74927]: pgmap v1139: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:31 compute-0 lucid_hawking[290516]: {
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:     "0": [
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:         {
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "devices": [
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "/dev/loop3"
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             ],
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "lv_name": "ceph_lv0",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "lv_size": "21470642176",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "name": "ceph_lv0",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "tags": {
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.cluster_name": "ceph",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.crush_device_class": "",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.encrypted": "0",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.osd_id": "0",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.type": "block",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.vdo": "0"
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             },
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "type": "block",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "vg_name": "ceph_vg0"
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:         }
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:     ],
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:     "1": [
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:         {
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "devices": [
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "/dev/loop4"
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             ],
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "lv_name": "ceph_lv1",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "lv_size": "21470642176",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "name": "ceph_lv1",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "tags": {
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.cluster_name": "ceph",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.crush_device_class": "",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.encrypted": "0",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.osd_id": "1",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.type": "block",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.vdo": "0"
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             },
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "type": "block",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "vg_name": "ceph_vg1"
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:         }
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:     ],
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:     "2": [
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:         {
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "devices": [
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "/dev/loop5"
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             ],
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "lv_name": "ceph_lv2",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "lv_size": "21470642176",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "name": "ceph_lv2",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "tags": {
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.cluster_name": "ceph",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.crush_device_class": "",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.encrypted": "0",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.osd_id": "2",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.type": "block",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:                 "ceph.vdo": "0"
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             },
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "type": "block",
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:             "vg_name": "ceph_vg2"
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:         }
Nov 24 18:52:31 compute-0 lucid_hawking[290516]:     ]
Nov 24 18:52:31 compute-0 lucid_hawking[290516]: }
Nov 24 18:52:31 compute-0 systemd[1]: libpod-0acdbf09e2aed4e651d164ed88ae7645f130b4000988ca28b4f6f8894eb713aa.scope: Deactivated successfully.
Nov 24 18:52:31 compute-0 podman[290499]: 2025-11-24 18:52:31.549014709 +0000 UTC m=+0.837623853 container died 0acdbf09e2aed4e651d164ed88ae7645f130b4000988ca28b4f6f8894eb713aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hawking, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 18:52:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7b7ce805eab4b94f60186a7798fa9087a1192a1a3a06629fa701f597695e6a8-merged.mount: Deactivated successfully.
Nov 24 18:52:31 compute-0 podman[290499]: 2025-11-24 18:52:31.619837069 +0000 UTC m=+0.908446183 container remove 0acdbf09e2aed4e651d164ed88ae7645f130b4000988ca28b4f6f8894eb713aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hawking, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 18:52:31 compute-0 systemd[1]: libpod-conmon-0acdbf09e2aed4e651d164ed88ae7645f130b4000988ca28b4f6f8894eb713aa.scope: Deactivated successfully.
Nov 24 18:52:31 compute-0 sudo[290395]: pam_unix(sudo:session): session closed for user root
Nov 24 18:52:31 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1140: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:31 compute-0 sudo[290537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:52:31 compute-0 sudo[290537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:52:31 compute-0 sudo[290537]: pam_unix(sudo:session): session closed for user root
Nov 24 18:52:31 compute-0 sudo[290562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:52:31 compute-0 sudo[290562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:52:31 compute-0 sudo[290562]: pam_unix(sudo:session): session closed for user root
Nov 24 18:52:31 compute-0 sudo[290587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:52:31 compute-0 sudo[290587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:52:31 compute-0 sudo[290587]: pam_unix(sudo:session): session closed for user root
Nov 24 18:52:31 compute-0 sudo[290612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:52:31 compute-0 sudo[290612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:52:32 compute-0 podman[290677]: 2025-11-24 18:52:32.234656797 +0000 UTC m=+0.052644925 container create 688c2f84aef8071853f9a461607c7fc5257a6331b58fcf2ef5e290ce5405299b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jepsen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:52:32 compute-0 systemd[1]: Started libpod-conmon-688c2f84aef8071853f9a461607c7fc5257a6331b58fcf2ef5e290ce5405299b.scope.
Nov 24 18:52:32 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:52:32 compute-0 podman[290677]: 2025-11-24 18:52:32.213604129 +0000 UTC m=+0.031592277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:52:32 compute-0 podman[290677]: 2025-11-24 18:52:32.323965021 +0000 UTC m=+0.141953219 container init 688c2f84aef8071853f9a461607c7fc5257a6331b58fcf2ef5e290ce5405299b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jepsen, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 18:52:32 compute-0 podman[290677]: 2025-11-24 18:52:32.333235209 +0000 UTC m=+0.151223337 container start 688c2f84aef8071853f9a461607c7fc5257a6331b58fcf2ef5e290ce5405299b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:52:32 compute-0 podman[290677]: 2025-11-24 18:52:32.337022882 +0000 UTC m=+0.155011100 container attach 688c2f84aef8071853f9a461607c7fc5257a6331b58fcf2ef5e290ce5405299b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jepsen, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 18:52:32 compute-0 romantic_jepsen[290694]: 167 167
Nov 24 18:52:32 compute-0 systemd[1]: libpod-688c2f84aef8071853f9a461607c7fc5257a6331b58fcf2ef5e290ce5405299b.scope: Deactivated successfully.
Nov 24 18:52:32 compute-0 podman[290677]: 2025-11-24 18:52:32.34018951 +0000 UTC m=+0.158177648 container died 688c2f84aef8071853f9a461607c7fc5257a6331b58fcf2ef5e290ce5405299b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jepsen, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:52:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-e32287e314bdca0bde1e1a3d6f36bd93081babf2b4a5e6dff77ea1038b81d550-merged.mount: Deactivated successfully.
Nov 24 18:52:32 compute-0 podman[290677]: 2025-11-24 18:52:32.377121177 +0000 UTC m=+0.195109305 container remove 688c2f84aef8071853f9a461607c7fc5257a6331b58fcf2ef5e290ce5405299b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jepsen, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 18:52:32 compute-0 systemd[1]: libpod-conmon-688c2f84aef8071853f9a461607c7fc5257a6331b58fcf2ef5e290ce5405299b.scope: Deactivated successfully.
Nov 24 18:52:32 compute-0 podman[290717]: 2025-11-24 18:52:32.564556003 +0000 UTC m=+0.047076388 container create 1322c698d51477ecf59cbbbcaf5437b3184abf419fe39ec593685f46a72bf034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:52:32 compute-0 systemd[1]: Started libpod-conmon-1322c698d51477ecf59cbbbcaf5437b3184abf419fe39ec593685f46a72bf034.scope.
Nov 24 18:52:32 compute-0 podman[290717]: 2025-11-24 18:52:32.543445924 +0000 UTC m=+0.025966299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:52:32 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:52:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fdc52a8dafe7f3466f8cf63eccc35dac25ff6b17e317be9120c31d551e4c081/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:52:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fdc52a8dafe7f3466f8cf63eccc35dac25ff6b17e317be9120c31d551e4c081/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:52:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fdc52a8dafe7f3466f8cf63eccc35dac25ff6b17e317be9120c31d551e4c081/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:52:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fdc52a8dafe7f3466f8cf63eccc35dac25ff6b17e317be9120c31d551e4c081/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:52:32 compute-0 podman[290717]: 2025-11-24 18:52:32.65761087 +0000 UTC m=+0.140131265 container init 1322c698d51477ecf59cbbbcaf5437b3184abf419fe39ec593685f46a72bf034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_blackburn, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 18:52:32 compute-0 podman[290717]: 2025-11-24 18:52:32.670055385 +0000 UTC m=+0.152575730 container start 1322c698d51477ecf59cbbbcaf5437b3184abf419fe39ec593685f46a72bf034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_blackburn, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 18:52:32 compute-0 podman[290717]: 2025-11-24 18:52:32.673248234 +0000 UTC m=+0.155768669 container attach 1322c698d51477ecf59cbbbcaf5437b3184abf419fe39ec593685f46a72bf034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_blackburn, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 18:52:33 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:52:33 compute-0 ceph-mon[74927]: pgmap v1140: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:33 compute-0 loving_blackburn[290733]: {
Nov 24 18:52:33 compute-0 loving_blackburn[290733]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:52:33 compute-0 loving_blackburn[290733]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:52:33 compute-0 loving_blackburn[290733]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:52:33 compute-0 loving_blackburn[290733]:         "osd_id": 0,
Nov 24 18:52:33 compute-0 loving_blackburn[290733]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:52:33 compute-0 loving_blackburn[290733]:         "type": "bluestore"
Nov 24 18:52:33 compute-0 loving_blackburn[290733]:     },
Nov 24 18:52:33 compute-0 loving_blackburn[290733]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:52:33 compute-0 loving_blackburn[290733]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:52:33 compute-0 loving_blackburn[290733]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:52:33 compute-0 loving_blackburn[290733]:         "osd_id": 1,
Nov 24 18:52:33 compute-0 loving_blackburn[290733]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:52:33 compute-0 loving_blackburn[290733]:         "type": "bluestore"
Nov 24 18:52:33 compute-0 loving_blackburn[290733]:     },
Nov 24 18:52:33 compute-0 loving_blackburn[290733]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:52:33 compute-0 loving_blackburn[290733]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:52:33 compute-0 loving_blackburn[290733]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:52:33 compute-0 loving_blackburn[290733]:         "osd_id": 2,
Nov 24 18:52:33 compute-0 loving_blackburn[290733]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:52:33 compute-0 loving_blackburn[290733]:         "type": "bluestore"
Nov 24 18:52:33 compute-0 loving_blackburn[290733]:     }
Nov 24 18:52:33 compute-0 loving_blackburn[290733]: }
Nov 24 18:52:33 compute-0 systemd[1]: libpod-1322c698d51477ecf59cbbbcaf5437b3184abf419fe39ec593685f46a72bf034.scope: Deactivated successfully.
Nov 24 18:52:33 compute-0 podman[290717]: 2025-11-24 18:52:33.570729616 +0000 UTC m=+1.053249961 container died 1322c698d51477ecf59cbbbcaf5437b3184abf419fe39ec593685f46a72bf034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 18:52:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fdc52a8dafe7f3466f8cf63eccc35dac25ff6b17e317be9120c31d551e4c081-merged.mount: Deactivated successfully.
Nov 24 18:52:33 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1141: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:33 compute-0 podman[290717]: 2025-11-24 18:52:33.827663399 +0000 UTC m=+1.310183744 container remove 1322c698d51477ecf59cbbbcaf5437b3184abf419fe39ec593685f46a72bf034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:52:33 compute-0 systemd[1]: libpod-conmon-1322c698d51477ecf59cbbbcaf5437b3184abf419fe39ec593685f46a72bf034.scope: Deactivated successfully.
Nov 24 18:52:33 compute-0 sudo[290612]: pam_unix(sudo:session): session closed for user root
Nov 24 18:52:33 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:52:34 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:52:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:52:34
Nov 24 18:52:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:52:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:52:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'volumes', '.mgr', 'images', 'backups']
Nov 24 18:52:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:52:34 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:52:34 compute-0 ceph-mon[74927]: pgmap v1141: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:52:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:52:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:52:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:52:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:52:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:52:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:52:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:52:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:52:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:52:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:52:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:52:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:52:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:52:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:52:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:52:35 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:52:35 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev d795b0c1-d3a8-482e-9732-c8194ac4f061 does not exist
Nov 24 18:52:35 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev b29f5b7b-6ab0-47d7-a28f-600694df12d0 does not exist
Nov 24 18:52:35 compute-0 sudo[290782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:52:35 compute-0 sudo[290782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:52:35 compute-0 sudo[290782]: pam_unix(sudo:session): session closed for user root
Nov 24 18:52:35 compute-0 sudo[290807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:52:35 compute-0 sudo[290807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:52:35 compute-0 sudo[290807]: pam_unix(sudo:session): session closed for user root
Nov 24 18:52:35 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1142: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:52:35 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:52:37 compute-0 ceph-mon[74927]: pgmap v1142: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:37 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1143: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:38 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:52:39 compute-0 ceph-mon[74927]: pgmap v1143: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:39 compute-0 podman[290832]: 2025-11-24 18:52:39.248596521 +0000 UTC m=+0.063362648 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 24 18:52:39 compute-0 podman[290833]: 2025-11-24 18:52:39.250752184 +0000 UTC m=+0.065371947 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 18:52:39 compute-0 podman[290834]: 2025-11-24 18:52:39.274114628 +0000 UTC m=+0.085046670 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:52:39 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1144: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:41 compute-0 ceph-mon[74927]: pgmap v1144: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:41 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1145: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:52:43 compute-0 ceph-mon[74927]: pgmap v1145: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:52:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:52:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 18:52:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 18:52:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:52:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 18:52:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:52:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:52:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:52:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:52:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:52:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:52:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:52:43 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1146: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:45 compute-0 ceph-mon[74927]: pgmap v1146: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:45 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1147: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:46 compute-0 sudo[282605]: pam_unix(sudo:session): session closed for user root
Nov 24 18:52:46 compute-0 sshd-session[282580]: Received disconnect from 192.168.122.10 port 41606:11: disconnected by user
Nov 24 18:52:46 compute-0 sshd-session[282580]: Disconnected from user zuul 192.168.122.10 port 41606
Nov 24 18:52:46 compute-0 sshd-session[282531]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:52:46 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Nov 24 18:52:46 compute-0 systemd[1]: session-54.scope: Consumed 2min 37.774s CPU time, 819.5M memory peak, read 286.7M from disk, written 149.2M to disk.
Nov 24 18:52:46 compute-0 systemd-logind[822]: Session 54 logged out. Waiting for processes to exit.
Nov 24 18:52:46 compute-0 systemd-logind[822]: Removed session 54.
Nov 24 18:52:46 compute-0 sshd-session[290892]: Accepted publickey for zuul from 192.168.122.10 port 41588 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:52:46 compute-0 systemd-logind[822]: New session 55 of user zuul.
Nov 24 18:52:46 compute-0 systemd[1]: Started Session 55 of User zuul.
Nov 24 18:52:46 compute-0 sshd-session[290892]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:52:46 compute-0 sudo[290896]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-11-24-fnhifsd.tar.xz
Nov 24 18:52:46 compute-0 sudo[290896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:52:46 compute-0 sudo[290896]: pam_unix(sudo:session): session closed for user root
Nov 24 18:52:46 compute-0 sshd-session[290895]: Received disconnect from 192.168.122.10 port 41588:11: disconnected by user
Nov 24 18:52:46 compute-0 sshd-session[290895]: Disconnected from user zuul 192.168.122.10 port 41588
Nov 24 18:52:46 compute-0 sshd-session[290892]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:52:46 compute-0 systemd-logind[822]: Session 55 logged out. Waiting for processes to exit.
Nov 24 18:52:46 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Nov 24 18:52:46 compute-0 systemd-logind[822]: Removed session 55.
Nov 24 18:52:46 compute-0 sshd-session[290921]: Accepted publickey for zuul from 192.168.122.10 port 41596 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:52:46 compute-0 systemd-logind[822]: New session 56 of user zuul.
Nov 24 18:52:46 compute-0 systemd[1]: Started Session 56 of User zuul.
Nov 24 18:52:46 compute-0 sshd-session[290921]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:52:46 compute-0 sudo[290925]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Nov 24 18:52:46 compute-0 sudo[290925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:52:46 compute-0 sudo[290925]: pam_unix(sudo:session): session closed for user root
Nov 24 18:52:46 compute-0 sshd-session[290924]: Received disconnect from 192.168.122.10 port 41596:11: disconnected by user
Nov 24 18:52:46 compute-0 sshd-session[290924]: Disconnected from user zuul 192.168.122.10 port 41596
Nov 24 18:52:46 compute-0 sshd-session[290921]: pam_unix(sshd:session): session closed for user zuul
Nov 24 18:52:46 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Nov 24 18:52:46 compute-0 systemd-logind[822]: Session 56 logged out. Waiting for processes to exit.
Nov 24 18:52:46 compute-0 systemd-logind[822]: Removed session 56.
Nov 24 18:52:47 compute-0 ceph-mon[74927]: pgmap v1147: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:47 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1148: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:52:49 compute-0 ceph-mon[74927]: pgmap v1148: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:49 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1149: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:49 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 24 18:52:49 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 18:52:51 compute-0 ceph-mon[74927]: pgmap v1149: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:51 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1150: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:53 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:52:53 compute-0 ceph-mon[74927]: pgmap v1150: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:53 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1151: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:55 compute-0 ceph-mon[74927]: pgmap v1151: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:55 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1152: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:57 compute-0 ceph-mon[74927]: pgmap v1152: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:57 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1153: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:58 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:52:59 compute-0 ceph-mon[74927]: pgmap v1153: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:52:59 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1154: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:01 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1155: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:01 compute-0 ceph-mon[74927]: pgmap v1154: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:03 compute-0 ceph-mon[74927]: pgmap v1155: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:53:03 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1156: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:53:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:53:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:53:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:53:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:53:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:53:05 compute-0 ceph-mon[74927]: pgmap v1156: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:05 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1157: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:07 compute-0 ceph-mon[74927]: pgmap v1157: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:07 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1158: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:53:09 compute-0 ceph-mon[74927]: pgmap v1158: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:09 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1159: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:09 compute-0 podman[290954]: 2025-11-24 18:53:09.961625185 +0000 UTC m=+0.052487900 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 18:53:09 compute-0 podman[290956]: 2025-11-24 18:53:09.994657997 +0000 UTC m=+0.080928389 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:53:09 compute-0 podman[290955]: 2025-11-24 18:53:09.994673687 +0000 UTC m=+0.081578745 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118)
Nov 24 18:53:11 compute-0 ceph-mon[74927]: pgmap v1159: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:11 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1160: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:12 compute-0 nova_compute[270693]: 2025-11-24 18:53:12.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:53:12 compute-0 nova_compute[270693]: 2025-11-24 18:53:12.529 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 18:53:12 compute-0 nova_compute[270693]: 2025-11-24 18:53:12.529 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 18:53:12 compute-0 nova_compute[270693]: 2025-11-24 18:53:12.548 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 18:53:12 compute-0 nova_compute[270693]: 2025-11-24 18:53:12.548 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:53:12 compute-0 nova_compute[270693]: 2025-11-24 18:53:12.549 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 24 18:53:12 compute-0 nova_compute[270693]: 2025-11-24 18:53:12.568 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 24 18:53:13 compute-0 ceph-mon[74927]: pgmap v1160: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:53:13 compute-0 nova_compute[270693]: 2025-11-24 18:53:13.563 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:53:13 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1161: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:14 compute-0 nova_compute[270693]: 2025-11-24 18:53:14.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:53:14 compute-0 nova_compute[270693]: 2025-11-24 18:53:14.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:53:14 compute-0 nova_compute[270693]: 2025-11-24 18:53:14.563 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:53:14 compute-0 nova_compute[270693]: 2025-11-24 18:53:14.563 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:53:14 compute-0 nova_compute[270693]: 2025-11-24 18:53:14.563 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:53:14 compute-0 nova_compute[270693]: 2025-11-24 18:53:14.563 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 18:53:14 compute-0 nova_compute[270693]: 2025-11-24 18:53:14.564 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:53:14 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:53:14 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2306989323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:53:14 compute-0 nova_compute[270693]: 2025-11-24 18:53:14.995 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:53:15 compute-0 ceph-mon[74927]: pgmap v1161: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:15 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2306989323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:53:15 compute-0 nova_compute[270693]: 2025-11-24 18:53:15.172 270697 WARNING nova.virt.libvirt.driver [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 18:53:15 compute-0 nova_compute[270693]: 2025-11-24 18:53:15.173 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4993MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 18:53:15 compute-0 nova_compute[270693]: 2025-11-24 18:53:15.173 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:53:15 compute-0 nova_compute[270693]: 2025-11-24 18:53:15.174 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:53:15 compute-0 nova_compute[270693]: 2025-11-24 18:53:15.496 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 18:53:15 compute-0 nova_compute[270693]: 2025-11-24 18:53:15.497 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 18:53:15 compute-0 nova_compute[270693]: 2025-11-24 18:53:15.516 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:53:15 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1162: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:15 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:53:15 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1174018674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:53:15 compute-0 nova_compute[270693]: 2025-11-24 18:53:15.990 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:53:15 compute-0 nova_compute[270693]: 2025-11-24 18:53:15.994 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 18:53:16 compute-0 nova_compute[270693]: 2025-11-24 18:53:16.011 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 18:53:16 compute-0 nova_compute[270693]: 2025-11-24 18:53:16.012 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 18:53:16 compute-0 nova_compute[270693]: 2025-11-24 18:53:16.013 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:53:16 compute-0 nova_compute[270693]: 2025-11-24 18:53:16.013 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:53:16 compute-0 nova_compute[270693]: 2025-11-24 18:53:16.013 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 24 18:53:16 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1174018674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:53:17 compute-0 ceph-mon[74927]: pgmap v1162: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:17 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1163: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:18 compute-0 nova_compute[270693]: 2025-11-24 18:53:18.025 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:53:18 compute-0 nova_compute[270693]: 2025-11-24 18:53:18.025 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:53:18 compute-0 nova_compute[270693]: 2025-11-24 18:53:18.026 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:53:18 compute-0 nova_compute[270693]: 2025-11-24 18:53:18.026 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:53:18 compute-0 nova_compute[270693]: 2025-11-24 18:53:18.026 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 18:53:18 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:53:18.166772) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010398166806, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1090, "num_deletes": 256, "total_data_size": 1432625, "memory_usage": 1460224, "flush_reason": "Manual Compaction"}
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010398178863, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1407679, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23270, "largest_seqno": 24359, "table_properties": {"data_size": 1402369, "index_size": 2642, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12221, "raw_average_key_size": 19, "raw_value_size": 1391260, "raw_average_value_size": 2233, "num_data_blocks": 119, "num_entries": 623, "num_filter_entries": 623, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764010312, "oldest_key_time": 1764010312, "file_creation_time": 1764010398, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 12193 microseconds, and 7135 cpu microseconds.
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:53:18.178964) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1407679 bytes OK
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:53:18.178985) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:53:18.180802) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:53:18.180824) EVENT_LOG_v1 {"time_micros": 1764010398180817, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:53:18.180844) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1427307, prev total WAL file size 1427307, number of live WAL files 2.
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:53:18.181721) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373535' seq:0, type:0; will stop at (end)
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1374KB)], [53(8966KB)]
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010398181760, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10589540, "oldest_snapshot_seqno": -1}
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4950 keys, 10495589 bytes, temperature: kUnknown
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010398259600, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 10495589, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10457638, "index_size": 24465, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12421, "raw_key_size": 122744, "raw_average_key_size": 24, "raw_value_size": 10363321, "raw_average_value_size": 2093, "num_data_blocks": 1027, "num_entries": 4950, "num_filter_entries": 4950, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008324, "oldest_key_time": 0, "file_creation_time": 1764010398, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:53:18.259961) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 10495589 bytes
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:53:18.261452) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.9 rd, 134.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 8.8 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(15.0) write-amplify(7.5) OK, records in: 5474, records dropped: 524 output_compression: NoCompression
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:53:18.261467) EVENT_LOG_v1 {"time_micros": 1764010398261460, "job": 28, "event": "compaction_finished", "compaction_time_micros": 77899, "compaction_time_cpu_micros": 28798, "output_level": 6, "num_output_files": 1, "total_output_size": 10495589, "num_input_records": 5474, "num_output_records": 4950, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010398261756, "job": 28, "event": "table_file_deletion", "file_number": 55}
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010398263271, "job": 28, "event": "table_file_deletion", "file_number": 53}
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:53:18.181608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:53:18.263326) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:53:18.263331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:53:18.263332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:53:18.263334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:53:18 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:53:18.263335) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:53:18 compute-0 nova_compute[270693]: 2025-11-24 18:53:18.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:53:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 18:53:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3221981061' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:53:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 18:53:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3221981061' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:53:19 compute-0 ceph-mon[74927]: pgmap v1163: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/3221981061' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:53:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/3221981061' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:53:19 compute-0 nova_compute[270693]: 2025-11-24 18:53:19.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:53:19 compute-0 nova_compute[270693]: 2025-11-24 18:53:19.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:53:19 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1164: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:21 compute-0 ceph-mon[74927]: pgmap v1164: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:21 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1165: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:53:22.748 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:53:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:53:22.749 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:53:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:53:22.749 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:53:23 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:53:23 compute-0 ceph-mon[74927]: pgmap v1165: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:23 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1166: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:25 compute-0 ceph-mon[74927]: pgmap v1166: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:25 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1167: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:27 compute-0 ceph-mon[74927]: pgmap v1167: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:27 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1168: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:28 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:53:29 compute-0 ceph-mon[74927]: pgmap v1168: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:29 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1169: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:31 compute-0 ceph-mon[74927]: pgmap v1169: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:31 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1170: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:33 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:53:33 compute-0 ceph-mon[74927]: pgmap v1170: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:33 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1171: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:53:34
Nov 24 18:53:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:53:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:53:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['images', 'volumes', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'backups', '.mgr', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Nov 24 18:53:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:53:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:53:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:53:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:53:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:53:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:53:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:53:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:53:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:53:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:53:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:53:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:53:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:53:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:53:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:53:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:53:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:53:35 compute-0 ceph-mon[74927]: pgmap v1171: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:35 compute-0 sudo[291062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:53:35 compute-0 sudo[291062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:53:35 compute-0 sudo[291062]: pam_unix(sudo:session): session closed for user root
Nov 24 18:53:35 compute-0 sudo[291087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:53:35 compute-0 sudo[291087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:53:35 compute-0 sudo[291087]: pam_unix(sudo:session): session closed for user root
Nov 24 18:53:35 compute-0 sudo[291112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:53:35 compute-0 sudo[291112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:53:35 compute-0 sudo[291112]: pam_unix(sudo:session): session closed for user root
Nov 24 18:53:35 compute-0 sudo[291137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:53:35 compute-0 sudo[291137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:53:35 compute-0 sudo[291137]: pam_unix(sudo:session): session closed for user root
Nov 24 18:53:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:53:35 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:53:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:53:35 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:53:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:53:35 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:53:35 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 92ddccfb-78c6-4518-83fe-9baecb4219d7 does not exist
Nov 24 18:53:35 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev fff73e79-12e0-4e24-a221-21d6770c88ef does not exist
Nov 24 18:53:35 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev a8231278-e104-4972-b931-8ba200fcbbd0 does not exist
Nov 24 18:53:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:53:35 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:53:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:53:35 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:53:35 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:53:35 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:53:35 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1172: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:35 compute-0 sudo[291194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:53:35 compute-0 sudo[291194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:53:35 compute-0 sudo[291194]: pam_unix(sudo:session): session closed for user root
Nov 24 18:53:36 compute-0 sudo[291219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:53:36 compute-0 sudo[291219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:53:36 compute-0 sudo[291219]: pam_unix(sudo:session): session closed for user root
Nov 24 18:53:36 compute-0 sudo[291244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:53:36 compute-0 sudo[291244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:53:36 compute-0 sudo[291244]: pam_unix(sudo:session): session closed for user root
Nov 24 18:53:36 compute-0 sudo[291269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:53:36 compute-0 sudo[291269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:53:36 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:53:36 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:53:36 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:53:36 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:53:36 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:53:36 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:53:36 compute-0 podman[291335]: 2025-11-24 18:53:36.430846321 +0000 UTC m=+0.042769892 container create 0dd1e49f4cc769b26f293daeb8411924fdc53f699f20a058c64d849603a4ee37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 24 18:53:36 compute-0 systemd[1]: Started libpod-conmon-0dd1e49f4cc769b26f293daeb8411924fdc53f699f20a058c64d849603a4ee37.scope.
Nov 24 18:53:36 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:53:36 compute-0 podman[291335]: 2025-11-24 18:53:36.409481286 +0000 UTC m=+0.021404897 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:53:36 compute-0 podman[291335]: 2025-11-24 18:53:36.518589737 +0000 UTC m=+0.130513358 container init 0dd1e49f4cc769b26f293daeb8411924fdc53f699f20a058c64d849603a4ee37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_haslett, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:53:36 compute-0 podman[291335]: 2025-11-24 18:53:36.525148799 +0000 UTC m=+0.137072400 container start 0dd1e49f4cc769b26f293daeb8411924fdc53f699f20a058c64d849603a4ee37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:53:36 compute-0 podman[291335]: 2025-11-24 18:53:36.529470085 +0000 UTC m=+0.141393696 container attach 0dd1e49f4cc769b26f293daeb8411924fdc53f699f20a058c64d849603a4ee37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:53:36 compute-0 musing_haslett[291351]: 167 167
Nov 24 18:53:36 compute-0 systemd[1]: libpod-0dd1e49f4cc769b26f293daeb8411924fdc53f699f20a058c64d849603a4ee37.scope: Deactivated successfully.
Nov 24 18:53:36 compute-0 podman[291335]: 2025-11-24 18:53:36.53294696 +0000 UTC m=+0.144870561 container died 0dd1e49f4cc769b26f293daeb8411924fdc53f699f20a058c64d849603a4ee37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_haslett, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:53:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4fc2463141212a8c1223c651dfbeda368dad2249f66738b0e404892e83a5abf-merged.mount: Deactivated successfully.
Nov 24 18:53:36 compute-0 podman[291335]: 2025-11-24 18:53:36.580199981 +0000 UTC m=+0.192123552 container remove 0dd1e49f4cc769b26f293daeb8411924fdc53f699f20a058c64d849603a4ee37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_haslett, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 18:53:36 compute-0 systemd[1]: libpod-conmon-0dd1e49f4cc769b26f293daeb8411924fdc53f699f20a058c64d849603a4ee37.scope: Deactivated successfully.
Nov 24 18:53:36 compute-0 podman[291377]: 2025-11-24 18:53:36.784807939 +0000 UTC m=+0.047475998 container create 32723a7a7b90fc4fe8f24b87079af7f0e5637b27d8661d73aeaa9e6962c0be21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 24 18:53:36 compute-0 systemd[1]: Started libpod-conmon-32723a7a7b90fc4fe8f24b87079af7f0e5637b27d8661d73aeaa9e6962c0be21.scope.
Nov 24 18:53:36 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:53:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c38a4345d4b60651abf2ce1b87ec1fb9a74313652823c8ecb7d8c77e9895cdb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:53:36 compute-0 podman[291377]: 2025-11-24 18:53:36.760218715 +0000 UTC m=+0.022886844 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:53:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c38a4345d4b60651abf2ce1b87ec1fb9a74313652823c8ecb7d8c77e9895cdb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:53:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c38a4345d4b60651abf2ce1b87ec1fb9a74313652823c8ecb7d8c77e9895cdb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:53:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c38a4345d4b60651abf2ce1b87ec1fb9a74313652823c8ecb7d8c77e9895cdb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:53:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c38a4345d4b60651abf2ce1b87ec1fb9a74313652823c8ecb7d8c77e9895cdb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:53:36 compute-0 podman[291377]: 2025-11-24 18:53:36.868178357 +0000 UTC m=+0.130846416 container init 32723a7a7b90fc4fe8f24b87079af7f0e5637b27d8661d73aeaa9e6962c0be21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:53:36 compute-0 podman[291377]: 2025-11-24 18:53:36.874113833 +0000 UTC m=+0.136781872 container start 32723a7a7b90fc4fe8f24b87079af7f0e5637b27d8661d73aeaa9e6962c0be21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hodgkin, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 18:53:36 compute-0 podman[291377]: 2025-11-24 18:53:36.877958368 +0000 UTC m=+0.140626407 container attach 32723a7a7b90fc4fe8f24b87079af7f0e5637b27d8661d73aeaa9e6962c0be21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hodgkin, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:53:37 compute-0 ceph-mon[74927]: pgmap v1172: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:37 compute-0 epic_hodgkin[291394]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:53:37 compute-0 epic_hodgkin[291394]: --> relative data size: 1.0
Nov 24 18:53:37 compute-0 epic_hodgkin[291394]: --> All data devices are unavailable
Nov 24 18:53:37 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1173: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:37 compute-0 systemd[1]: libpod-32723a7a7b90fc4fe8f24b87079af7f0e5637b27d8661d73aeaa9e6962c0be21.scope: Deactivated successfully.
Nov 24 18:53:37 compute-0 systemd[1]: libpod-32723a7a7b90fc4fe8f24b87079af7f0e5637b27d8661d73aeaa9e6962c0be21.scope: Consumed 1.023s CPU time.
Nov 24 18:53:37 compute-0 podman[291377]: 2025-11-24 18:53:37.951289941 +0000 UTC m=+1.213957990 container died 32723a7a7b90fc4fe8f24b87079af7f0e5637b27d8661d73aeaa9e6962c0be21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 24 18:53:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c38a4345d4b60651abf2ce1b87ec1fb9a74313652823c8ecb7d8c77e9895cdb-merged.mount: Deactivated successfully.
Nov 24 18:53:38 compute-0 podman[291377]: 2025-11-24 18:53:38.007363129 +0000 UTC m=+1.270031168 container remove 32723a7a7b90fc4fe8f24b87079af7f0e5637b27d8661d73aeaa9e6962c0be21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:53:38 compute-0 systemd[1]: libpod-conmon-32723a7a7b90fc4fe8f24b87079af7f0e5637b27d8661d73aeaa9e6962c0be21.scope: Deactivated successfully.
Nov 24 18:53:38 compute-0 sudo[291269]: pam_unix(sudo:session): session closed for user root
Nov 24 18:53:38 compute-0 sudo[291435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:53:38 compute-0 sudo[291435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:53:38 compute-0 sudo[291435]: pam_unix(sudo:session): session closed for user root
Nov 24 18:53:38 compute-0 sudo[291460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:53:38 compute-0 sudo[291460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:53:38 compute-0 sudo[291460]: pam_unix(sudo:session): session closed for user root
Nov 24 18:53:38 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:53:38 compute-0 sudo[291485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:53:38 compute-0 sudo[291485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:53:38 compute-0 sudo[291485]: pam_unix(sudo:session): session closed for user root
Nov 24 18:53:38 compute-0 sudo[291510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:53:38 compute-0 sudo[291510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:53:38 compute-0 podman[291575]: 2025-11-24 18:53:38.709123822 +0000 UTC m=+0.059287958 container create 3281aa45d6f8b150f70ada932affbfecbafab621f0c27285ffeab42f758ceb6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 18:53:38 compute-0 systemd[1]: Started libpod-conmon-3281aa45d6f8b150f70ada932affbfecbafab621f0c27285ffeab42f758ceb6b.scope.
Nov 24 18:53:38 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:53:38 compute-0 podman[291575]: 2025-11-24 18:53:38.69274516 +0000 UTC m=+0.042909316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:53:38 compute-0 podman[291575]: 2025-11-24 18:53:38.827634484 +0000 UTC m=+0.177798660 container init 3281aa45d6f8b150f70ada932affbfecbafab621f0c27285ffeab42f758ceb6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_leakey, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:53:38 compute-0 podman[291575]: 2025-11-24 18:53:38.838194074 +0000 UTC m=+0.188358240 container start 3281aa45d6f8b150f70ada932affbfecbafab621f0c27285ffeab42f758ceb6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_leakey, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 24 18:53:38 compute-0 festive_leakey[291591]: 167 167
Nov 24 18:53:38 compute-0 systemd[1]: libpod-3281aa45d6f8b150f70ada932affbfecbafab621f0c27285ffeab42f758ceb6b.scope: Deactivated successfully.
Nov 24 18:53:38 compute-0 podman[291575]: 2025-11-24 18:53:38.853047029 +0000 UTC m=+0.203211195 container attach 3281aa45d6f8b150f70ada932affbfecbafab621f0c27285ffeab42f758ceb6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 18:53:38 compute-0 podman[291575]: 2025-11-24 18:53:38.853576472 +0000 UTC m=+0.203740608 container died 3281aa45d6f8b150f70ada932affbfecbafab621f0c27285ffeab42f758ceb6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_leakey, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:53:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-429a8dea5b3e8e5eed04137ddb42b4595e998c737af63b376b2a322f252ea9bd-merged.mount: Deactivated successfully.
Nov 24 18:53:38 compute-0 podman[291575]: 2025-11-24 18:53:38.91127421 +0000 UTC m=+0.261438366 container remove 3281aa45d6f8b150f70ada932affbfecbafab621f0c27285ffeab42f758ceb6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_leakey, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:53:38 compute-0 systemd[1]: libpod-conmon-3281aa45d6f8b150f70ada932affbfecbafab621f0c27285ffeab42f758ceb6b.scope: Deactivated successfully.
Nov 24 18:53:39 compute-0 podman[291614]: 2025-11-24 18:53:39.095018175 +0000 UTC m=+0.039326088 container create 9a9b16513af6bcdee517a2d470a5ecb4f34d5732da517da47da01f727960fef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_goldwasser, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 18:53:39 compute-0 systemd[1]: Started libpod-conmon-9a9b16513af6bcdee517a2d470a5ecb4f34d5732da517da47da01f727960fef2.scope.
Nov 24 18:53:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:53:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/022c6883d2efbc17eacd697e5b2adf99eb3fcd4f7b54df9728a0104881d24dae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:53:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/022c6883d2efbc17eacd697e5b2adf99eb3fcd4f7b54df9728a0104881d24dae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:53:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/022c6883d2efbc17eacd697e5b2adf99eb3fcd4f7b54df9728a0104881d24dae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:53:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/022c6883d2efbc17eacd697e5b2adf99eb3fcd4f7b54df9728a0104881d24dae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:53:39 compute-0 podman[291614]: 2025-11-24 18:53:39.169475014 +0000 UTC m=+0.113782947 container init 9a9b16513af6bcdee517a2d470a5ecb4f34d5732da517da47da01f727960fef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 18:53:39 compute-0 podman[291614]: 2025-11-24 18:53:39.174963189 +0000 UTC m=+0.119271102 container start 9a9b16513af6bcdee517a2d470a5ecb4f34d5732da517da47da01f727960fef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_goldwasser, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:53:39 compute-0 podman[291614]: 2025-11-24 18:53:39.07979391 +0000 UTC m=+0.024101843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:53:39 compute-0 podman[291614]: 2025-11-24 18:53:39.178024404 +0000 UTC m=+0.122332337 container attach 9a9b16513af6bcdee517a2d470a5ecb4f34d5732da517da47da01f727960fef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]: {
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:     "0": [
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:         {
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "devices": [
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "/dev/loop3"
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             ],
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "lv_name": "ceph_lv0",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "lv_size": "21470642176",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "name": "ceph_lv0",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "tags": {
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.cluster_name": "ceph",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.crush_device_class": "",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.encrypted": "0",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.osd_id": "0",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.type": "block",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.vdo": "0"
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             },
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "type": "block",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "vg_name": "ceph_vg0"
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:         }
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:     ],
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:     "1": [
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:         {
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "devices": [
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "/dev/loop4"
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             ],
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "lv_name": "ceph_lv1",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "lv_size": "21470642176",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "name": "ceph_lv1",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "tags": {
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.cluster_name": "ceph",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.crush_device_class": "",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.encrypted": "0",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.osd_id": "1",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.type": "block",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.vdo": "0"
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             },
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "type": "block",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "vg_name": "ceph_vg1"
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:         }
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:     ],
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:     "2": [
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:         {
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "devices": [
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "/dev/loop5"
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             ],
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "lv_name": "ceph_lv2",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "lv_size": "21470642176",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "name": "ceph_lv2",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "tags": {
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.cluster_name": "ceph",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.crush_device_class": "",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.encrypted": "0",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.osd_id": "2",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.type": "block",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:                 "ceph.vdo": "0"
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             },
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "type": "block",
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:             "vg_name": "ceph_vg2"
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:         }
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]:     ]
Nov 24 18:53:39 compute-0 tender_goldwasser[291630]: }
Nov 24 18:53:39 compute-0 systemd[1]: libpod-9a9b16513af6bcdee517a2d470a5ecb4f34d5732da517da47da01f727960fef2.scope: Deactivated successfully.
Nov 24 18:53:39 compute-0 podman[291614]: 2025-11-24 18:53:39.937059395 +0000 UTC m=+0.881367318 container died 9a9b16513af6bcdee517a2d470a5ecb4f34d5732da517da47da01f727960fef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_goldwasser, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:53:39 compute-0 ceph-mon[74927]: pgmap v1173: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:39 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1174: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-022c6883d2efbc17eacd697e5b2adf99eb3fcd4f7b54df9728a0104881d24dae-merged.mount: Deactivated successfully.
Nov 24 18:53:40 compute-0 podman[291614]: 2025-11-24 18:53:40.038242471 +0000 UTC m=+0.982550384 container remove 9a9b16513af6bcdee517a2d470a5ecb4f34d5732da517da47da01f727960fef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:53:40 compute-0 systemd[1]: libpod-conmon-9a9b16513af6bcdee517a2d470a5ecb4f34d5732da517da47da01f727960fef2.scope: Deactivated successfully.
Nov 24 18:53:40 compute-0 sudo[291510]: pam_unix(sudo:session): session closed for user root
Nov 24 18:53:40 compute-0 podman[291653]: 2025-11-24 18:53:40.07602041 +0000 UTC m=+0.055531666 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 24 18:53:40 compute-0 podman[291654]: 2025-11-24 18:53:40.100314067 +0000 UTC m=+0.075655930 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 18:53:40 compute-0 podman[291655]: 2025-11-24 18:53:40.10574646 +0000 UTC m=+0.078172812 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 24 18:53:40 compute-0 sudo[291706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:53:40 compute-0 sudo[291706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:53:40 compute-0 sudo[291706]: pam_unix(sudo:session): session closed for user root
Nov 24 18:53:40 compute-0 sudo[291737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:53:40 compute-0 sudo[291737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:53:40 compute-0 sudo[291737]: pam_unix(sudo:session): session closed for user root
Nov 24 18:53:40 compute-0 sudo[291762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:53:40 compute-0 sudo[291762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:53:40 compute-0 sudo[291762]: pam_unix(sudo:session): session closed for user root
Nov 24 18:53:40 compute-0 sudo[291787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:53:40 compute-0 sudo[291787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:53:40 compute-0 podman[291849]: 2025-11-24 18:53:40.589660601 +0000 UTC m=+0.063880991 container create 8c9404e5ca2ca8eea8af46f6d5e8d1db3fcc85506272f8073034f80eb82fdb48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lehmann, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:53:40 compute-0 systemd[1]: Started libpod-conmon-8c9404e5ca2ca8eea8af46f6d5e8d1db3fcc85506272f8073034f80eb82fdb48.scope.
Nov 24 18:53:40 compute-0 podman[291849]: 2025-11-24 18:53:40.547103685 +0000 UTC m=+0.021324075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:53:40 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:53:40 compute-0 podman[291849]: 2025-11-24 18:53:40.674046614 +0000 UTC m=+0.148267024 container init 8c9404e5ca2ca8eea8af46f6d5e8d1db3fcc85506272f8073034f80eb82fdb48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 24 18:53:40 compute-0 podman[291849]: 2025-11-24 18:53:40.680684778 +0000 UTC m=+0.154905168 container start 8c9404e5ca2ca8eea8af46f6d5e8d1db3fcc85506272f8073034f80eb82fdb48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:53:40 compute-0 musing_lehmann[291867]: 167 167
Nov 24 18:53:40 compute-0 systemd[1]: libpod-8c9404e5ca2ca8eea8af46f6d5e8d1db3fcc85506272f8073034f80eb82fdb48.scope: Deactivated successfully.
Nov 24 18:53:40 compute-0 podman[291849]: 2025-11-24 18:53:40.688827268 +0000 UTC m=+0.163047688 container attach 8c9404e5ca2ca8eea8af46f6d5e8d1db3fcc85506272f8073034f80eb82fdb48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:53:40 compute-0 podman[291849]: 2025-11-24 18:53:40.689526225 +0000 UTC m=+0.163746625 container died 8c9404e5ca2ca8eea8af46f6d5e8d1db3fcc85506272f8073034f80eb82fdb48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:53:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c5e020dbc6b5e54d2f5816c5b11753e9aaf2bb408be23abb037a26e37ae5a22-merged.mount: Deactivated successfully.
Nov 24 18:53:40 compute-0 podman[291849]: 2025-11-24 18:53:40.733877935 +0000 UTC m=+0.208098325 container remove 8c9404e5ca2ca8eea8af46f6d5e8d1db3fcc85506272f8073034f80eb82fdb48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lehmann, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:53:40 compute-0 systemd[1]: libpod-conmon-8c9404e5ca2ca8eea8af46f6d5e8d1db3fcc85506272f8073034f80eb82fdb48.scope: Deactivated successfully.
Nov 24 18:53:40 compute-0 podman[291891]: 2025-11-24 18:53:40.856496458 +0000 UTC m=+0.021008148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:53:40 compute-0 podman[291891]: 2025-11-24 18:53:40.978536125 +0000 UTC m=+0.143047785 container create 0ce0ac1a451e294eab4cf1e4d0f08fe9797d7c4b80c64db74c0a3822c4cdba63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_sutherland, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 18:53:41 compute-0 ceph-mon[74927]: pgmap v1174: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:41 compute-0 systemd[1]: Started libpod-conmon-0ce0ac1a451e294eab4cf1e4d0f08fe9797d7c4b80c64db74c0a3822c4cdba63.scope.
Nov 24 18:53:41 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:53:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28cef2078c2105beaa96b37d39284d51575227db8dc4f7e010fdbd00e3b97e98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:53:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28cef2078c2105beaa96b37d39284d51575227db8dc4f7e010fdbd00e3b97e98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:53:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28cef2078c2105beaa96b37d39284d51575227db8dc4f7e010fdbd00e3b97e98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:53:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28cef2078c2105beaa96b37d39284d51575227db8dc4f7e010fdbd00e3b97e98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:53:41 compute-0 podman[291891]: 2025-11-24 18:53:41.271347151 +0000 UTC m=+0.435858911 container init 0ce0ac1a451e294eab4cf1e4d0f08fe9797d7c4b80c64db74c0a3822c4cdba63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_sutherland, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:53:41 compute-0 podman[291891]: 2025-11-24 18:53:41.284527224 +0000 UTC m=+0.449038934 container start 0ce0ac1a451e294eab4cf1e4d0f08fe9797d7c4b80c64db74c0a3822c4cdba63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:53:41 compute-0 podman[291891]: 2025-11-24 18:53:41.319006682 +0000 UTC m=+0.483518342 container attach 0ce0ac1a451e294eab4cf1e4d0f08fe9797d7c4b80c64db74c0a3822c4cdba63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_sutherland, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 18:53:41 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1175: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:42 compute-0 recursing_sutherland[291908]: {
Nov 24 18:53:42 compute-0 recursing_sutherland[291908]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:53:42 compute-0 recursing_sutherland[291908]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:53:42 compute-0 recursing_sutherland[291908]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:53:42 compute-0 recursing_sutherland[291908]:         "osd_id": 0,
Nov 24 18:53:42 compute-0 recursing_sutherland[291908]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:53:42 compute-0 recursing_sutherland[291908]:         "type": "bluestore"
Nov 24 18:53:42 compute-0 recursing_sutherland[291908]:     },
Nov 24 18:53:42 compute-0 recursing_sutherland[291908]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:53:42 compute-0 recursing_sutherland[291908]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:53:42 compute-0 recursing_sutherland[291908]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:53:42 compute-0 recursing_sutherland[291908]:         "osd_id": 1,
Nov 24 18:53:42 compute-0 recursing_sutherland[291908]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:53:42 compute-0 recursing_sutherland[291908]:         "type": "bluestore"
Nov 24 18:53:42 compute-0 recursing_sutherland[291908]:     },
Nov 24 18:53:42 compute-0 recursing_sutherland[291908]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:53:42 compute-0 recursing_sutherland[291908]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:53:42 compute-0 recursing_sutherland[291908]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:53:42 compute-0 recursing_sutherland[291908]:         "osd_id": 2,
Nov 24 18:53:42 compute-0 recursing_sutherland[291908]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:53:42 compute-0 recursing_sutherland[291908]:         "type": "bluestore"
Nov 24 18:53:42 compute-0 recursing_sutherland[291908]:     }
Nov 24 18:53:42 compute-0 recursing_sutherland[291908]: }
Nov 24 18:53:42 compute-0 systemd[1]: libpod-0ce0ac1a451e294eab4cf1e4d0f08fe9797d7c4b80c64db74c0a3822c4cdba63.scope: Deactivated successfully.
Nov 24 18:53:42 compute-0 podman[291891]: 2025-11-24 18:53:42.225528437 +0000 UTC m=+1.390040137 container died 0ce0ac1a451e294eab4cf1e4d0f08fe9797d7c4b80c64db74c0a3822c4cdba63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_sutherland, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:53:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-28cef2078c2105beaa96b37d39284d51575227db8dc4f7e010fdbd00e3b97e98-merged.mount: Deactivated successfully.
Nov 24 18:53:42 compute-0 podman[291891]: 2025-11-24 18:53:42.358039353 +0000 UTC m=+1.522551053 container remove 0ce0ac1a451e294eab4cf1e4d0f08fe9797d7c4b80c64db74c0a3822c4cdba63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:53:42 compute-0 systemd[1]: libpod-conmon-0ce0ac1a451e294eab4cf1e4d0f08fe9797d7c4b80c64db74c0a3822c4cdba63.scope: Deactivated successfully.
Nov 24 18:53:42 compute-0 sudo[291787]: pam_unix(sudo:session): session closed for user root
Nov 24 18:53:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:53:42 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:53:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:53:42 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:53:42 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev bdceaf1f-16a9-4587-ab61-4f247ab6a6bd does not exist
Nov 24 18:53:42 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 52e54912-27e9-41b4-bd32-a511269b9c19 does not exist
Nov 24 18:53:42 compute-0 sudo[291955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:53:42 compute-0 sudo[291955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:53:42 compute-0 sudo[291955]: pam_unix(sudo:session): session closed for user root
Nov 24 18:53:42 compute-0 sudo[291980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:53:42 compute-0 sudo[291980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:53:42 compute-0 sudo[291980]: pam_unix(sudo:session): session closed for user root
Nov 24 18:53:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:53:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:53:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:53:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:53:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:53:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 18:53:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:53:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 18:53:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:53:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:53:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:53:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 18:53:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:53:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:53:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:53:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:53:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:53:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:53:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:53:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:53:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:53:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:53:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:53:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:53:43 compute-0 nova_compute[270693]: 2025-11-24 18:53:43.432 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:53:43 compute-0 ceph-mon[74927]: pgmap v1175: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:43 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:53:43 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:53:43 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1176: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:45 compute-0 ceph-mon[74927]: pgmap v1176: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:45 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1177: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:47 compute-0 ceph-mon[74927]: pgmap v1177: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:47 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1178: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:53:49 compute-0 ceph-mon[74927]: pgmap v1178: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:49 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1179: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:51 compute-0 ceph-mon[74927]: pgmap v1179: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:51 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1180: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:53 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:53:53 compute-0 ceph-mon[74927]: pgmap v1180: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:53 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1181: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:55 compute-0 ceph-mon[74927]: pgmap v1181: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:55 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1182: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:57 compute-0 ceph-mon[74927]: pgmap v1182: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:57 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1183: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:58 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:53:59 compute-0 ceph-mon[74927]: pgmap v1183: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:53:59 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1184: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:01 compute-0 ceph-mon[74927]: pgmap v1184: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:01 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1185: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:54:03 compute-0 ceph-mon[74927]: pgmap v1185: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:03 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1186: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:54:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:54:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:54:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:54:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:54:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:54:05 compute-0 ceph-mon[74927]: pgmap v1186: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:05 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1187: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:07 compute-0 ceph-mon[74927]: pgmap v1187: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:07 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1188: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:54:09 compute-0 ceph-mon[74927]: pgmap v1188: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:09 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1189: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:10 compute-0 podman[292007]: 2025-11-24 18:54:10.971795694 +0000 UTC m=+0.056612622 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 18:54:10 compute-0 podman[292005]: 2025-11-24 18:54:10.987831598 +0000 UTC m=+0.082320624 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 24 18:54:10 compute-0 podman[292006]: 2025-11-24 18:54:10.987863609 +0000 UTC m=+0.078291425 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 18:54:11 compute-0 ceph-mon[74927]: pgmap v1189: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:11 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1190: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:54:13 compute-0 nova_compute[270693]: 2025-11-24 18:54:13.558 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:54:13 compute-0 nova_compute[270693]: 2025-11-24 18:54:13.559 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 18:54:13 compute-0 nova_compute[270693]: 2025-11-24 18:54:13.559 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 18:54:13 compute-0 nova_compute[270693]: 2025-11-24 18:54:13.582 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 18:54:13 compute-0 ceph-mon[74927]: pgmap v1190: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:13 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1191: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:14 compute-0 nova_compute[270693]: 2025-11-24 18:54:14.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:54:14 compute-0 nova_compute[270693]: 2025-11-24 18:54:14.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:54:14 compute-0 nova_compute[270693]: 2025-11-24 18:54:14.558 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:54:14 compute-0 nova_compute[270693]: 2025-11-24 18:54:14.559 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:54:14 compute-0 nova_compute[270693]: 2025-11-24 18:54:14.559 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:54:14 compute-0 nova_compute[270693]: 2025-11-24 18:54:14.559 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 18:54:14 compute-0 nova_compute[270693]: 2025-11-24 18:54:14.559 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:54:14 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:54:14 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/89496242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:54:14 compute-0 nova_compute[270693]: 2025-11-24 18:54:14.966 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:54:15 compute-0 nova_compute[270693]: 2025-11-24 18:54:15.155 270697 WARNING nova.virt.libvirt.driver [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 18:54:15 compute-0 nova_compute[270693]: 2025-11-24 18:54:15.156 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5021MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 18:54:15 compute-0 nova_compute[270693]: 2025-11-24 18:54:15.157 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:54:15 compute-0 nova_compute[270693]: 2025-11-24 18:54:15.157 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:54:15 compute-0 nova_compute[270693]: 2025-11-24 18:54:15.290 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 18:54:15 compute-0 nova_compute[270693]: 2025-11-24 18:54:15.290 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 18:54:15 compute-0 nova_compute[270693]: 2025-11-24 18:54:15.361 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Refreshing inventories for resource provider d1cce7ec-de83-4810-91f8-1852891da8a6 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 24 18:54:15 compute-0 nova_compute[270693]: 2025-11-24 18:54:15.389 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Updating ProviderTree inventory for provider d1cce7ec-de83-4810-91f8-1852891da8a6 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 24 18:54:15 compute-0 nova_compute[270693]: 2025-11-24 18:54:15.390 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Updating inventory in ProviderTree for provider d1cce7ec-de83-4810-91f8-1852891da8a6 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 18:54:15 compute-0 nova_compute[270693]: 2025-11-24 18:54:15.404 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Refreshing aggregate associations for resource provider d1cce7ec-de83-4810-91f8-1852891da8a6, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 24 18:54:15 compute-0 nova_compute[270693]: 2025-11-24 18:54:15.429 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Refreshing trait associations for resource provider d1cce7ec-de83-4810-91f8-1852891da8a6, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_FMA3,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE4A,HW_CPU_X86_F16C,HW_CPU_X86_SSE2,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_E1000 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 24 18:54:15 compute-0 nova_compute[270693]: 2025-11-24 18:54:15.450 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:54:15 compute-0 ceph-mon[74927]: pgmap v1191: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:15 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/89496242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:54:15 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:54:15 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1577540923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:54:15 compute-0 nova_compute[270693]: 2025-11-24 18:54:15.890 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:54:15 compute-0 nova_compute[270693]: 2025-11-24 18:54:15.896 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 18:54:15 compute-0 nova_compute[270693]: 2025-11-24 18:54:15.915 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 18:54:15 compute-0 nova_compute[270693]: 2025-11-24 18:54:15.917 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 18:54:15 compute-0 nova_compute[270693]: 2025-11-24 18:54:15.917 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:54:15 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1192: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:16 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1577540923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:54:17 compute-0 ceph-mon[74927]: pgmap v1192: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:17 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1193: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:18 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:54:18 compute-0 nova_compute[270693]: 2025-11-24 18:54:18.918 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:54:18 compute-0 nova_compute[270693]: 2025-11-24 18:54:18.918 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:54:18 compute-0 nova_compute[270693]: 2025-11-24 18:54:18.918 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:54:18 compute-0 nova_compute[270693]: 2025-11-24 18:54:18.919 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:54:18 compute-0 nova_compute[270693]: 2025-11-24 18:54:18.919 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:54:18 compute-0 nova_compute[270693]: 2025-11-24 18:54:18.919 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 18:54:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 18:54:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3833330106' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:54:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 18:54:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3833330106' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:54:19 compute-0 nova_compute[270693]: 2025-11-24 18:54:19.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:54:19 compute-0 ceph-mon[74927]: pgmap v1193: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/3833330106' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:54:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/3833330106' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:54:19 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1194: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:21 compute-0 ceph-mon[74927]: pgmap v1194: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:21 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1195: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:54:22.750 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:54:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:54:22.751 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:54:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:54:22.751 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:54:23 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:54:23 compute-0 ceph-mon[74927]: pgmap v1195: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:23 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1196: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:25 compute-0 ceph-mon[74927]: pgmap v1196: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:25 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1197: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:27 compute-0 ceph-mon[74927]: pgmap v1197: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:27 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1198: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:28 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:54:29 compute-0 ceph-mon[74927]: pgmap v1198: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:29 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1199: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:31 compute-0 ceph-mon[74927]: pgmap v1199: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:31 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1200: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:33 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:54:33 compute-0 ceph-mon[74927]: pgmap v1200: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:33 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1201: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:54:34
Nov 24 18:54:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:54:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:54:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'images', '.mgr', 'volumes', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta']
Nov 24 18:54:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:54:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:54:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:54:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:54:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:54:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:54:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:54:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:54:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:54:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:54:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:54:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:54:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:54:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:54:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:54:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:54:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:54:35 compute-0 ceph-mon[74927]: pgmap v1201: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:35 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1202: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:37 compute-0 ceph-mon[74927]: pgmap v1202: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:37 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1203: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:38 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:54:39 compute-0 ceph-mon[74927]: pgmap v1203: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:39 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1204: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:41 compute-0 ceph-mon[74927]: pgmap v1204: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:41 compute-0 podman[292116]: 2025-11-24 18:54:41.970551208 +0000 UTC m=+0.062083287 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:54:41 compute-0 podman[292114]: 2025-11-24 18:54:41.970538277 +0000 UTC m=+0.066122095 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent)
Nov 24 18:54:41 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1205: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:42 compute-0 podman[292115]: 2025-11-24 18:54:42.000598066 +0000 UTC m=+0.093212981 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 24 18:54:42 compute-0 sudo[292172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:54:42 compute-0 sudo[292172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:54:42 compute-0 sudo[292172]: pam_unix(sudo:session): session closed for user root
Nov 24 18:54:42 compute-0 sudo[292197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:54:42 compute-0 sudo[292197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:54:42 compute-0 sudo[292197]: pam_unix(sudo:session): session closed for user root
Nov 24 18:54:42 compute-0 sudo[292222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:54:42 compute-0 sudo[292222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:54:42 compute-0 sudo[292222]: pam_unix(sudo:session): session closed for user root
Nov 24 18:54:42 compute-0 sudo[292247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:54:42 compute-0 sudo[292247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:54:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:54:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:54:43 compute-0 sudo[292247]: pam_unix(sudo:session): session closed for user root
Nov 24 18:54:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:54:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:54:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:54:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 18:54:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:54:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 18:54:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:54:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:54:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:54:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 18:54:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:54:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:54:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:54:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:54:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:54:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:54:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:54:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:54:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:54:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:54:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:54:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:54:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:54:43 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:54:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:54:43 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:54:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:54:43 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:54:43 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 7197bd52-fe64-4759-a5bf-216b948a3b5a does not exist
Nov 24 18:54:43 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 28008b84-01a8-4b4d-8f9a-60ae2d856c6f does not exist
Nov 24 18:54:43 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev eb8d64d3-6e0f-4785-b46f-7e295bf2814a does not exist
Nov 24 18:54:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:54:43 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:54:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:54:43 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:54:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:54:43 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:54:43 compute-0 sudo[292303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:54:43 compute-0 sudo[292303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:54:43 compute-0 sudo[292303]: pam_unix(sudo:session): session closed for user root
Nov 24 18:54:43 compute-0 sudo[292328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:54:43 compute-0 sudo[292328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:54:43 compute-0 sudo[292328]: pam_unix(sudo:session): session closed for user root
Nov 24 18:54:43 compute-0 sudo[292353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:54:43 compute-0 sudo[292353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:54:43 compute-0 sudo[292353]: pam_unix(sudo:session): session closed for user root
Nov 24 18:54:43 compute-0 sudo[292378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:54:43 compute-0 sudo[292378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:54:43 compute-0 ceph-mon[74927]: pgmap v1205: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:43 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:54:43 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:54:43 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:54:43 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:54:43 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:54:43 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:54:43 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1206: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:44 compute-0 podman[292445]: 2025-11-24 18:54:44.015015844 +0000 UTC m=+0.066352601 container create 17f91eb6244f14ebdb3968059695ac9a020c02656f2869d5c43671753f90e5c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_napier, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 18:54:44 compute-0 systemd[1]: Started libpod-conmon-17f91eb6244f14ebdb3968059695ac9a020c02656f2869d5c43671753f90e5c8.scope.
Nov 24 18:54:44 compute-0 podman[292445]: 2025-11-24 18:54:43.975469373 +0000 UTC m=+0.026806090 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:54:44 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:54:44 compute-0 podman[292445]: 2025-11-24 18:54:44.100835173 +0000 UTC m=+0.152171910 container init 17f91eb6244f14ebdb3968059695ac9a020c02656f2869d5c43671753f90e5c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 24 18:54:44 compute-0 podman[292445]: 2025-11-24 18:54:44.113365271 +0000 UTC m=+0.164702028 container start 17f91eb6244f14ebdb3968059695ac9a020c02656f2869d5c43671753f90e5c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_napier, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:54:44 compute-0 podman[292445]: 2025-11-24 18:54:44.117686317 +0000 UTC m=+0.169023064 container attach 17f91eb6244f14ebdb3968059695ac9a020c02656f2869d5c43671753f90e5c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Nov 24 18:54:44 compute-0 amazing_napier[292461]: 167 167
Nov 24 18:54:44 compute-0 systemd[1]: libpod-17f91eb6244f14ebdb3968059695ac9a020c02656f2869d5c43671753f90e5c8.scope: Deactivated successfully.
Nov 24 18:54:44 compute-0 podman[292445]: 2025-11-24 18:54:44.119979374 +0000 UTC m=+0.171316141 container died 17f91eb6244f14ebdb3968059695ac9a020c02656f2869d5c43671753f90e5c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_napier, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 18:54:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9f65cd8ae298581a06beb9ff8d87903a7521d8ee8cedefd74c4978b7e822fcf-merged.mount: Deactivated successfully.
Nov 24 18:54:44 compute-0 podman[292445]: 2025-11-24 18:54:44.167412399 +0000 UTC m=+0.218749116 container remove 17f91eb6244f14ebdb3968059695ac9a020c02656f2869d5c43671753f90e5c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 18:54:44 compute-0 systemd[1]: libpod-conmon-17f91eb6244f14ebdb3968059695ac9a020c02656f2869d5c43671753f90e5c8.scope: Deactivated successfully.
Nov 24 18:54:44 compute-0 podman[292486]: 2025-11-24 18:54:44.324172081 +0000 UTC m=+0.036531269 container create a8101e6bccbf64f3b1daf3c70cf9cc8ddf8fe497a607039548d087f017eb62af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ritchie, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 18:54:44 compute-0 systemd[1]: Started libpod-conmon-a8101e6bccbf64f3b1daf3c70cf9cc8ddf8fe497a607039548d087f017eb62af.scope.
Nov 24 18:54:44 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:54:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb46543163619c9ae3a59fa9ce8852bc8a18e1eeb7be6f4a88e2add165178ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:54:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb46543163619c9ae3a59fa9ce8852bc8a18e1eeb7be6f4a88e2add165178ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:54:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb46543163619c9ae3a59fa9ce8852bc8a18e1eeb7be6f4a88e2add165178ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:54:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb46543163619c9ae3a59fa9ce8852bc8a18e1eeb7be6f4a88e2add165178ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:54:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb46543163619c9ae3a59fa9ce8852bc8a18e1eeb7be6f4a88e2add165178ad/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:54:44 compute-0 podman[292486]: 2025-11-24 18:54:44.308788313 +0000 UTC m=+0.021147531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:54:44 compute-0 podman[292486]: 2025-11-24 18:54:44.412096391 +0000 UTC m=+0.124455629 container init a8101e6bccbf64f3b1daf3c70cf9cc8ddf8fe497a607039548d087f017eb62af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ritchie, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:54:44 compute-0 podman[292486]: 2025-11-24 18:54:44.417821442 +0000 UTC m=+0.130180640 container start a8101e6bccbf64f3b1daf3c70cf9cc8ddf8fe497a607039548d087f017eb62af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 18:54:44 compute-0 podman[292486]: 2025-11-24 18:54:44.422052796 +0000 UTC m=+0.134412044 container attach a8101e6bccbf64f3b1daf3c70cf9cc8ddf8fe497a607039548d087f017eb62af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ritchie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:54:45 compute-0 great_ritchie[292503]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:54:45 compute-0 great_ritchie[292503]: --> relative data size: 1.0
Nov 24 18:54:45 compute-0 great_ritchie[292503]: --> All data devices are unavailable
Nov 24 18:54:45 compute-0 systemd[1]: libpod-a8101e6bccbf64f3b1daf3c70cf9cc8ddf8fe497a607039548d087f017eb62af.scope: Deactivated successfully.
Nov 24 18:54:45 compute-0 podman[292486]: 2025-11-24 18:54:45.385622662 +0000 UTC m=+1.097981900 container died a8101e6bccbf64f3b1daf3c70cf9cc8ddf8fe497a607039548d087f017eb62af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ritchie, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 18:54:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfb46543163619c9ae3a59fa9ce8852bc8a18e1eeb7be6f4a88e2add165178ad-merged.mount: Deactivated successfully.
Nov 24 18:54:45 compute-0 podman[292486]: 2025-11-24 18:54:45.441651969 +0000 UTC m=+1.154011167 container remove a8101e6bccbf64f3b1daf3c70cf9cc8ddf8fe497a607039548d087f017eb62af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 18:54:45 compute-0 systemd[1]: libpod-conmon-a8101e6bccbf64f3b1daf3c70cf9cc8ddf8fe497a607039548d087f017eb62af.scope: Deactivated successfully.
Nov 24 18:54:45 compute-0 sudo[292378]: pam_unix(sudo:session): session closed for user root
Nov 24 18:54:45 compute-0 sudo[292544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:54:45 compute-0 sudo[292544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:54:45 compute-0 sudo[292544]: pam_unix(sudo:session): session closed for user root
Nov 24 18:54:45 compute-0 sudo[292569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:54:45 compute-0 sudo[292569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:54:45 compute-0 sudo[292569]: pam_unix(sudo:session): session closed for user root
Nov 24 18:54:45 compute-0 sudo[292594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:54:45 compute-0 sudo[292594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:54:45 compute-0 sudo[292594]: pam_unix(sudo:session): session closed for user root
Nov 24 18:54:45 compute-0 sudo[292619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:54:45 compute-0 sudo[292619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:54:45 compute-0 ceph-mon[74927]: pgmap v1206: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:45 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1207: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:46 compute-0 podman[292685]: 2025-11-24 18:54:46.177860879 +0000 UTC m=+0.058440357 container create 75dc735ec4bc989a3a2d17e938b6c69bfcb1517a11e6c4677286b38e47ac8f13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_darwin, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 18:54:46 compute-0 systemd[1]: Started libpod-conmon-75dc735ec4bc989a3a2d17e938b6c69bfcb1517a11e6c4677286b38e47ac8f13.scope.
Nov 24 18:54:46 compute-0 podman[292685]: 2025-11-24 18:54:46.144989911 +0000 UTC m=+0.025569449 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:54:46 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:54:46 compute-0 podman[292685]: 2025-11-24 18:54:46.284623032 +0000 UTC m=+0.165202500 container init 75dc735ec4bc989a3a2d17e938b6c69bfcb1517a11e6c4677286b38e47ac8f13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_darwin, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 18:54:46 compute-0 podman[292685]: 2025-11-24 18:54:46.293512991 +0000 UTC m=+0.174092439 container start 75dc735ec4bc989a3a2d17e938b6c69bfcb1517a11e6c4677286b38e47ac8f13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_darwin, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:54:46 compute-0 podman[292685]: 2025-11-24 18:54:46.296684609 +0000 UTC m=+0.177264077 container attach 75dc735ec4bc989a3a2d17e938b6c69bfcb1517a11e6c4677286b38e47ac8f13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:54:46 compute-0 systemd[1]: libpod-75dc735ec4bc989a3a2d17e938b6c69bfcb1517a11e6c4677286b38e47ac8f13.scope: Deactivated successfully.
Nov 24 18:54:46 compute-0 dazzling_darwin[292701]: 167 167
Nov 24 18:54:46 compute-0 conmon[292701]: conmon 75dc735ec4bc989a3a2d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-75dc735ec4bc989a3a2d17e938b6c69bfcb1517a11e6c4677286b38e47ac8f13.scope/container/memory.events
Nov 24 18:54:46 compute-0 podman[292685]: 2025-11-24 18:54:46.302084661 +0000 UTC m=+0.182664109 container died 75dc735ec4bc989a3a2d17e938b6c69bfcb1517a11e6c4677286b38e47ac8f13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_darwin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 18:54:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-84f3ed0ec9a368f84fc33a83929ac21757af239c839a7fea0f887b05355a1ec3-merged.mount: Deactivated successfully.
Nov 24 18:54:46 compute-0 podman[292685]: 2025-11-24 18:54:46.347981509 +0000 UTC m=+0.228560947 container remove 75dc735ec4bc989a3a2d17e938b6c69bfcb1517a11e6c4677286b38e47ac8f13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_darwin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:54:46 compute-0 systemd[1]: libpod-conmon-75dc735ec4bc989a3a2d17e938b6c69bfcb1517a11e6c4677286b38e47ac8f13.scope: Deactivated successfully.
Nov 24 18:54:46 compute-0 podman[292723]: 2025-11-24 18:54:46.551443049 +0000 UTC m=+0.073874007 container create d15aebf5377941f7c20211410a5bb836a172119446b07028894a592fa0b23810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_proskuriakova, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:54:46 compute-0 systemd[1]: Started libpod-conmon-d15aebf5377941f7c20211410a5bb836a172119446b07028894a592fa0b23810.scope.
Nov 24 18:54:46 compute-0 podman[292723]: 2025-11-24 18:54:46.520095248 +0000 UTC m=+0.042526246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:54:46 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:54:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa043ab67367ad343ee3773d2c03ab1ef3a60b0c24cb3f080de96be0db142ac3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:54:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa043ab67367ad343ee3773d2c03ab1ef3a60b0c24cb3f080de96be0db142ac3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:54:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa043ab67367ad343ee3773d2c03ab1ef3a60b0c24cb3f080de96be0db142ac3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:54:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa043ab67367ad343ee3773d2c03ab1ef3a60b0c24cb3f080de96be0db142ac3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:54:46 compute-0 podman[292723]: 2025-11-24 18:54:46.653308721 +0000 UTC m=+0.175739779 container init d15aebf5377941f7c20211410a5bb836a172119446b07028894a592fa0b23810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_proskuriakova, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:54:46 compute-0 podman[292723]: 2025-11-24 18:54:46.669153591 +0000 UTC m=+0.191584569 container start d15aebf5377941f7c20211410a5bb836a172119446b07028894a592fa0b23810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_proskuriakova, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 18:54:46 compute-0 podman[292723]: 2025-11-24 18:54:46.675112207 +0000 UTC m=+0.197543195 container attach d15aebf5377941f7c20211410a5bb836a172119446b07028894a592fa0b23810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]: {
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:     "0": [
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:         {
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "devices": [
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "/dev/loop3"
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             ],
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "lv_name": "ceph_lv0",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "lv_size": "21470642176",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "name": "ceph_lv0",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "tags": {
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.cluster_name": "ceph",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.crush_device_class": "",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.encrypted": "0",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.osd_id": "0",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.type": "block",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.vdo": "0"
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             },
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "type": "block",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "vg_name": "ceph_vg0"
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:         }
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:     ],
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:     "1": [
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:         {
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "devices": [
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "/dev/loop4"
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             ],
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "lv_name": "ceph_lv1",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "lv_size": "21470642176",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "name": "ceph_lv1",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "tags": {
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.cluster_name": "ceph",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.crush_device_class": "",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.encrypted": "0",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.osd_id": "1",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.type": "block",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.vdo": "0"
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             },
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "type": "block",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "vg_name": "ceph_vg1"
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:         }
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:     ],
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:     "2": [
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:         {
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "devices": [
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "/dev/loop5"
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             ],
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "lv_name": "ceph_lv2",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "lv_size": "21470642176",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "name": "ceph_lv2",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "tags": {
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.cluster_name": "ceph",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.crush_device_class": "",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.encrypted": "0",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.osd_id": "2",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.type": "block",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:                 "ceph.vdo": "0"
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             },
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "type": "block",
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:             "vg_name": "ceph_vg2"
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:         }
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]:     ]
Nov 24 18:54:47 compute-0 serene_proskuriakova[292740]: }
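The JSON block emitted above by the `ceph-volume lvm list --format json` container is the machine-readable inventory cephadm consumes: the top-level keys are OSD ids, each mapping to a list of LV records whose `devices` array names the backing physical devices. A minimal sketch of extracting an OSD-to-device map from that report (the `raw` sample below is an abbreviated, hypothetical reconstruction of the output shown in the log, with the syslog prefixes stripped; field names match the log):

```python
import json

# Abbreviated sample of `ceph-volume lvm list --format json` output, as seen
# in the log above: OSD id -> list of logical-volume records.
raw = """
{
    "0": [{"devices": ["/dev/loop3"],
           "lv_path": "/dev/ceph_vg0/ceph_lv0",
           "tags": {"ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b"}}],
    "1": [{"devices": ["/dev/loop4"],
           "lv_path": "/dev/ceph_vg1/ceph_lv1",
           "tags": {"ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19"}}]
}
"""

def osd_device_map(report: str) -> dict:
    """Map each OSD id to the physical devices backing its logical volumes."""
    data = json.loads(report)
    return {osd_id: [dev for lv in lvs for dev in lv["devices"]]
            for osd_id, lvs in data.items()}

print(osd_device_map(raw))  # e.g. {'0': ['/dev/loop3'], '1': ['/dev/loop4']}
```

This is only an illustration of consuming the report; the real cephadm binary (invoked via sudo at 18:54:45 above) performs its own parsing internally.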
Nov 24 18:54:47 compute-0 systemd[1]: libpod-d15aebf5377941f7c20211410a5bb836a172119446b07028894a592fa0b23810.scope: Deactivated successfully.
Nov 24 18:54:47 compute-0 podman[292723]: 2025-11-24 18:54:47.502128969 +0000 UTC m=+1.024559937 container died d15aebf5377941f7c20211410a5bb836a172119446b07028894a592fa0b23810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_proskuriakova, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:54:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa043ab67367ad343ee3773d2c03ab1ef3a60b0c24cb3f080de96be0db142ac3-merged.mount: Deactivated successfully.
Nov 24 18:54:47 compute-0 podman[292723]: 2025-11-24 18:54:47.568282404 +0000 UTC m=+1.090713352 container remove d15aebf5377941f7c20211410a5bb836a172119446b07028894a592fa0b23810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_proskuriakova, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:54:47 compute-0 systemd[1]: libpod-conmon-d15aebf5377941f7c20211410a5bb836a172119446b07028894a592fa0b23810.scope: Deactivated successfully.
Nov 24 18:54:47 compute-0 sudo[292619]: pam_unix(sudo:session): session closed for user root
Nov 24 18:54:47 compute-0 sudo[292764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:54:47 compute-0 sudo[292764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:54:47 compute-0 sudo[292764]: pam_unix(sudo:session): session closed for user root
Nov 24 18:54:47 compute-0 sudo[292789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:54:47 compute-0 sudo[292789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:54:47 compute-0 sudo[292789]: pam_unix(sudo:session): session closed for user root
Nov 24 18:54:47 compute-0 sudo[292814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:54:47 compute-0 sudo[292814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:54:47 compute-0 sudo[292814]: pam_unix(sudo:session): session closed for user root
Nov 24 18:54:47 compute-0 sudo[292839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:54:47 compute-0 sudo[292839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:54:47 compute-0 ceph-mon[74927]: pgmap v1207: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:47 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1208: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:54:48 compute-0 podman[292902]: 2025-11-24 18:54:48.229318777 +0000 UTC m=+0.042133006 container create e079fa7d8f5a1c57041ce2143e3d740ae041a2bb2f2b474c226341b1eb0b6204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:54:48 compute-0 systemd[1]: Started libpod-conmon-e079fa7d8f5a1c57041ce2143e3d740ae041a2bb2f2b474c226341b1eb0b6204.scope.
Nov 24 18:54:48 compute-0 podman[292902]: 2025-11-24 18:54:48.209959142 +0000 UTC m=+0.022773341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:54:48 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:54:48 compute-0 podman[292902]: 2025-11-24 18:54:48.32384774 +0000 UTC m=+0.136661959 container init e079fa7d8f5a1c57041ce2143e3d740ae041a2bb2f2b474c226341b1eb0b6204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:54:48 compute-0 podman[292902]: 2025-11-24 18:54:48.331505648 +0000 UTC m=+0.144319877 container start e079fa7d8f5a1c57041ce2143e3d740ae041a2bb2f2b474c226341b1eb0b6204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pare, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:54:48 compute-0 podman[292902]: 2025-11-24 18:54:48.335513737 +0000 UTC m=+0.148327976 container attach e079fa7d8f5a1c57041ce2143e3d740ae041a2bb2f2b474c226341b1eb0b6204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pare, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:54:48 compute-0 fervent_pare[292918]: 167 167
Nov 24 18:54:48 compute-0 systemd[1]: libpod-e079fa7d8f5a1c57041ce2143e3d740ae041a2bb2f2b474c226341b1eb0b6204.scope: Deactivated successfully.
Nov 24 18:54:48 compute-0 podman[292902]: 2025-11-24 18:54:48.337857244 +0000 UTC m=+0.150671443 container died e079fa7d8f5a1c57041ce2143e3d740ae041a2bb2f2b474c226341b1eb0b6204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pare, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Nov 24 18:54:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c157e2c162ae628de4f016b44ccc195922960f6bcffa479a2249332d9e2dd86-merged.mount: Deactivated successfully.
Nov 24 18:54:48 compute-0 podman[292902]: 2025-11-24 18:54:48.381002154 +0000 UTC m=+0.193816353 container remove e079fa7d8f5a1c57041ce2143e3d740ae041a2bb2f2b474c226341b1eb0b6204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pare, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 18:54:48 compute-0 systemd[1]: libpod-conmon-e079fa7d8f5a1c57041ce2143e3d740ae041a2bb2f2b474c226341b1eb0b6204.scope: Deactivated successfully.
Nov 24 18:54:48 compute-0 podman[292942]: 2025-11-24 18:54:48.616358928 +0000 UTC m=+0.075269341 container create 30da490ee68be73f9528dcdb36f1f65e8520508000af671ec770398f625c8bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_tesla, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:54:48 compute-0 systemd[1]: Started libpod-conmon-30da490ee68be73f9528dcdb36f1f65e8520508000af671ec770398f625c8bf8.scope.
Nov 24 18:54:48 compute-0 podman[292942]: 2025-11-24 18:54:48.585995722 +0000 UTC m=+0.044906195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:54:48 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:54:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd4b6b2a7862c2189e93683b4ff712850daf925761fecff67f94bc679a36db0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:54:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd4b6b2a7862c2189e93683b4ff712850daf925761fecff67f94bc679a36db0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:54:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd4b6b2a7862c2189e93683b4ff712850daf925761fecff67f94bc679a36db0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:54:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd4b6b2a7862c2189e93683b4ff712850daf925761fecff67f94bc679a36db0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:54:48 compute-0 podman[292942]: 2025-11-24 18:54:48.724396993 +0000 UTC m=+0.183307456 container init 30da490ee68be73f9528dcdb36f1f65e8520508000af671ec770398f625c8bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_tesla, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:54:48 compute-0 podman[292942]: 2025-11-24 18:54:48.735804333 +0000 UTC m=+0.194714756 container start 30da490ee68be73f9528dcdb36f1f65e8520508000af671ec770398f625c8bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_tesla, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Nov 24 18:54:48 compute-0 podman[292942]: 2025-11-24 18:54:48.73974868 +0000 UTC m=+0.198659123 container attach 30da490ee68be73f9528dcdb36f1f65e8520508000af671ec770398f625c8bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:54:49 compute-0 vigilant_tesla[292958]: {
Nov 24 18:54:49 compute-0 vigilant_tesla[292958]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:54:49 compute-0 vigilant_tesla[292958]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:54:49 compute-0 vigilant_tesla[292958]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:54:49 compute-0 vigilant_tesla[292958]:         "osd_id": 0,
Nov 24 18:54:49 compute-0 vigilant_tesla[292958]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:54:49 compute-0 vigilant_tesla[292958]:         "type": "bluestore"
Nov 24 18:54:49 compute-0 vigilant_tesla[292958]:     },
Nov 24 18:54:49 compute-0 vigilant_tesla[292958]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:54:49 compute-0 vigilant_tesla[292958]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:54:49 compute-0 vigilant_tesla[292958]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:54:49 compute-0 vigilant_tesla[292958]:         "osd_id": 1,
Nov 24 18:54:49 compute-0 vigilant_tesla[292958]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:54:49 compute-0 vigilant_tesla[292958]:         "type": "bluestore"
Nov 24 18:54:49 compute-0 vigilant_tesla[292958]:     },
Nov 24 18:54:49 compute-0 vigilant_tesla[292958]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:54:49 compute-0 vigilant_tesla[292958]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:54:49 compute-0 vigilant_tesla[292958]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:54:49 compute-0 vigilant_tesla[292958]:         "osd_id": 2,
Nov 24 18:54:49 compute-0 vigilant_tesla[292958]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:54:49 compute-0 vigilant_tesla[292958]:         "type": "bluestore"
Nov 24 18:54:49 compute-0 vigilant_tesla[292958]:     }
Nov 24 18:54:49 compute-0 vigilant_tesla[292958]: }
Nov 24 18:54:49 compute-0 systemd[1]: libpod-30da490ee68be73f9528dcdb36f1f65e8520508000af671ec770398f625c8bf8.scope: Deactivated successfully.
Nov 24 18:54:49 compute-0 conmon[292958]: conmon 30da490ee68be73f9528 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-30da490ee68be73f9528dcdb36f1f65e8520508000af671ec770398f625c8bf8.scope/container/memory.events
Nov 24 18:54:49 compute-0 podman[292942]: 2025-11-24 18:54:49.695854142 +0000 UTC m=+1.154764535 container died 30da490ee68be73f9528dcdb36f1f65e8520508000af671ec770398f625c8bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_tesla, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:54:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd4b6b2a7862c2189e93683b4ff712850daf925761fecff67f94bc679a36db0a-merged.mount: Deactivated successfully.
Nov 24 18:54:49 compute-0 podman[292942]: 2025-11-24 18:54:49.747106831 +0000 UTC m=+1.206017214 container remove 30da490ee68be73f9528dcdb36f1f65e8520508000af671ec770398f625c8bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_tesla, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 18:54:49 compute-0 systemd[1]: libpod-conmon-30da490ee68be73f9528dcdb36f1f65e8520508000af671ec770398f625c8bf8.scope: Deactivated successfully.
Nov 24 18:54:49 compute-0 sudo[292839]: pam_unix(sudo:session): session closed for user root
Nov 24 18:54:49 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:54:49 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:54:49 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:54:49 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:54:49 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 207368e3-91ff-40a1-88d7-d199b34eba9e does not exist
Nov 24 18:54:49 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev a485c534-c8df-47e3-bbef-4ef70eed0778 does not exist
Nov 24 18:54:49 compute-0 sudo[293001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:54:49 compute-0 sudo[293001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:54:49 compute-0 sudo[293001]: pam_unix(sudo:session): session closed for user root
Nov 24 18:54:49 compute-0 sudo[293026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:54:49 compute-0 sudo[293026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:54:49 compute-0 ceph-mon[74927]: pgmap v1208: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:49 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:54:49 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:54:49 compute-0 sudo[293026]: pam_unix(sudo:session): session closed for user root
Nov 24 18:54:49 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1209: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:51 compute-0 ceph-mon[74927]: pgmap v1209: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:51 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1210: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:53 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:54:53 compute-0 ceph-mon[74927]: pgmap v1210: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:53 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1211: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:55 compute-0 ceph-mon[74927]: pgmap v1211: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:55 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1212: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:57 compute-0 ceph-mon[74927]: pgmap v1212: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:57 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1213: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:58 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:54:59 compute-0 ceph-mon[74927]: pgmap v1213: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:54:59 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1214: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:01 compute-0 ceph-mon[74927]: pgmap v1214: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:01 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1215: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:55:03 compute-0 ceph-mon[74927]: pgmap v1215: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:03 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1216: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:55:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:55:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:55:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:55:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:55:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:55:04 compute-0 ceph-mon[74927]: pgmap v1216: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:05 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1217: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:07 compute-0 ceph-mon[74927]: pgmap v1217: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:07 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1218: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:55:09 compute-0 ceph-mon[74927]: pgmap v1218: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:09 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1219: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:11 compute-0 ceph-mon[74927]: pgmap v1219: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:11 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1220: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:12 compute-0 podman[293051]: 2025-11-24 18:55:12.971545006 +0000 UTC m=+0.063917054 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 18:55:12 compute-0 podman[293053]: 2025-11-24 18:55:12.974813996 +0000 UTC m=+0.062267734 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 24 18:55:13 compute-0 ceph-mon[74927]: pgmap v1220: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:13 compute-0 podman[293052]: 2025-11-24 18:55:13.062785297 +0000 UTC m=+0.151736431 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Nov 24 18:55:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:55:13 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1221: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:14 compute-0 nova_compute[270693]: 2025-11-24 18:55:14.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:55:14 compute-0 nova_compute[270693]: 2025-11-24 18:55:14.528 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 18:55:14 compute-0 nova_compute[270693]: 2025-11-24 18:55:14.529 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 18:55:14 compute-0 nova_compute[270693]: 2025-11-24 18:55:14.580 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 18:55:14 compute-0 nova_compute[270693]: 2025-11-24 18:55:14.581 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:55:14 compute-0 nova_compute[270693]: 2025-11-24 18:55:14.611 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:55:14 compute-0 nova_compute[270693]: 2025-11-24 18:55:14.612 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:55:14 compute-0 nova_compute[270693]: 2025-11-24 18:55:14.612 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:55:14 compute-0 nova_compute[270693]: 2025-11-24 18:55:14.612 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 18:55:14 compute-0 nova_compute[270693]: 2025-11-24 18:55:14.613 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:55:15 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:55:15 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1503757313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:55:15 compute-0 nova_compute[270693]: 2025-11-24 18:55:15.027 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:55:15 compute-0 ceph-mon[74927]: pgmap v1221: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:15 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1503757313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:55:15 compute-0 nova_compute[270693]: 2025-11-24 18:55:15.189 270697 WARNING nova.virt.libvirt.driver [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 18:55:15 compute-0 nova_compute[270693]: 2025-11-24 18:55:15.191 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5002MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 18:55:15 compute-0 nova_compute[270693]: 2025-11-24 18:55:15.191 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:55:15 compute-0 nova_compute[270693]: 2025-11-24 18:55:15.191 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:55:15 compute-0 nova_compute[270693]: 2025-11-24 18:55:15.270 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 18:55:15 compute-0 nova_compute[270693]: 2025-11-24 18:55:15.270 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 18:55:15 compute-0 nova_compute[270693]: 2025-11-24 18:55:15.289 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:55:15 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:55:15 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3702818248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:55:15 compute-0 nova_compute[270693]: 2025-11-24 18:55:15.679 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.391s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:55:15 compute-0 nova_compute[270693]: 2025-11-24 18:55:15.686 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 18:55:15 compute-0 nova_compute[270693]: 2025-11-24 18:55:15.705 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 18:55:15 compute-0 nova_compute[270693]: 2025-11-24 18:55:15.707 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 18:55:15 compute-0 nova_compute[270693]: 2025-11-24 18:55:15.707 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.516s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:55:15 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1222: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:16 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3702818248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:55:16 compute-0 rsyslogd[1008]: imjournal: 15850 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 24 18:55:16 compute-0 nova_compute[270693]: 2025-11-24 18:55:16.703 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:55:16 compute-0 nova_compute[270693]: 2025-11-24 18:55:16.704 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:55:17 compute-0 ceph-mon[74927]: pgmap v1222: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:17 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1223: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:18 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:55:18 compute-0 nova_compute[270693]: 2025-11-24 18:55:18.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:55:18 compute-0 nova_compute[270693]: 2025-11-24 18:55:18.529 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 18:55:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 18:55:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2092914081' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:55:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 18:55:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2092914081' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:55:19 compute-0 ceph-mon[74927]: pgmap v1223: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/2092914081' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:55:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/2092914081' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:55:19 compute-0 nova_compute[270693]: 2025-11-24 18:55:19.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:55:19 compute-0 nova_compute[270693]: 2025-11-24 18:55:19.530 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:55:19 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1224: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:20 compute-0 nova_compute[270693]: 2025-11-24 18:55:20.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:55:20 compute-0 nova_compute[270693]: 2025-11-24 18:55:20.530 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:55:21 compute-0 ceph-mon[74927]: pgmap v1224: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:21 compute-0 nova_compute[270693]: 2025-11-24 18:55:21.527 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:55:21 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1225: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:55:22.752 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:55:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:55:22.752 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:55:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:55:22.752 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:55:23 compute-0 ceph-mon[74927]: pgmap v1225: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:23 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:55:23 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1226: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:25 compute-0 ceph-mon[74927]: pgmap v1226: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:25 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1227: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:27 compute-0 ceph-mon[74927]: pgmap v1227: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:27 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1228: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:28 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:55:29 compute-0 ceph-mon[74927]: pgmap v1228: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:29 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1229: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:31 compute-0 ceph-mon[74927]: pgmap v1229: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.168323) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010531168379, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1530, "num_deletes": 501, "total_data_size": 1978710, "memory_usage": 2007920, "flush_reason": "Manual Compaction"}
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010531183139, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1934495, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24360, "largest_seqno": 25889, "table_properties": {"data_size": 1927782, "index_size": 3403, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17528, "raw_average_key_size": 19, "raw_value_size": 1912341, "raw_average_value_size": 2141, "num_data_blocks": 153, "num_entries": 893, "num_filter_entries": 893, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764010399, "oldest_key_time": 1764010399, "file_creation_time": 1764010531, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 14892 microseconds, and 8858 cpu microseconds.
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.183213) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1934495 bytes OK
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.183244) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.185062) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.185092) EVENT_LOG_v1 {"time_micros": 1764010531185082, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.185190) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1970955, prev total WAL file size 1970955, number of live WAL files 2.
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.186439) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1889KB)], [56(10MB)]
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010531186490, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 12430084, "oldest_snapshot_seqno": -1}
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4829 keys, 7429202 bytes, temperature: kUnknown
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010531245160, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7429202, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7396567, "index_size": 19469, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12101, "raw_key_size": 121324, "raw_average_key_size": 25, "raw_value_size": 7308802, "raw_average_value_size": 1513, "num_data_blocks": 802, "num_entries": 4829, "num_filter_entries": 4829, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008324, "oldest_key_time": 0, "file_creation_time": 1764010531, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.245529) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7429202 bytes
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.246921) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 211.5 rd, 126.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 10.0 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(10.3) write-amplify(3.8) OK, records in: 5843, records dropped: 1014 output_compression: NoCompression
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.246968) EVENT_LOG_v1 {"time_micros": 1764010531246951, "job": 30, "event": "compaction_finished", "compaction_time_micros": 58778, "compaction_time_cpu_micros": 33868, "output_level": 6, "num_output_files": 1, "total_output_size": 7429202, "num_input_records": 5843, "num_output_records": 4829, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010531247366, "job": 30, "event": "table_file_deletion", "file_number": 58}
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010531249092, "job": 30, "event": "table_file_deletion", "file_number": 56}
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.186317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.249147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.249154) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.249157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.249160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:55:31 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.249162) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:55:31 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1230: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:33 compute-0 ceph-mon[74927]: pgmap v1230: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:33 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:55:33 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1231: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:34 compute-0 nova_compute[270693]: 2025-11-24 18:55:34.146 270697 DEBUG oslo_concurrency.processutils [None req-6ece738e-cb4a-43c0-9acf-dac429c31015 129aaec41c194fc181333dedde345fb5 a9452fe831594f6ba61571a76d883af5 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:55:34 compute-0 nova_compute[270693]: 2025-11-24 18:55:34.171 270697 DEBUG oslo_concurrency.processutils [None req-6ece738e-cb4a-43c0-9acf-dac429c31015 129aaec41c194fc181333dedde345fb5 a9452fe831594f6ba61571a76d883af5 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:55:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:55:34
Nov 24 18:55:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:55:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:55:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'default.rgw.meta', 'volumes', '.rgw.root', '.mgr']
Nov 24 18:55:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:55:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:55:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:55:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:55:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:55:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:55:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:55:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:55:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:55:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:55:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:55:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:55:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:55:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:55:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:55:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:55:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:55:35 compute-0 ceph-mon[74927]: pgmap v1231: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:35 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1232: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:37 compute-0 ceph-mon[74927]: pgmap v1232: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:37 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1233: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:38 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:55:39 compute-0 ceph-mon[74927]: pgmap v1233: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:39 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1234: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:41 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:55:41.086 179763 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:2b:64', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'fa:26:5b:32:fa:ba'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 18:55:41 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:55:41.088 179763 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 18:55:41 compute-0 ceph-mon[74927]: pgmap v1234: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:41 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1235: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:55:43 compute-0 ceph-mon[74927]: pgmap v1235: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:55:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:55:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:55:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:55:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 18:55:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:55:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 18:55:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:55:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:55:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:55:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 18:55:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:55:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:55:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:55:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:55:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:55:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:55:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:55:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:55:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:55:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:55:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:55:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:55:43 compute-0 podman[293160]: 2025-11-24 18:55:43.983607222 +0000 UTC m=+0.078642603 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 24 18:55:43 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1236: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:44 compute-0 podman[293162]: 2025-11-24 18:55:44.006196325 +0000 UTC m=+0.082960730 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 24 18:55:44 compute-0 podman[293161]: 2025-11-24 18:55:44.026611114 +0000 UTC m=+0.114347377 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 18:55:45 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:55:45.089 179763 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=302e9f34-0427-4ff9-a29b-2fc7b5250666, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 18:55:45 compute-0 ceph-mon[74927]: pgmap v1236: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:46 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1237: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:47 compute-0 ceph-mon[74927]: pgmap v1237: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:48 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1238: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:55:49 compute-0 ceph-mon[74927]: pgmap v1238: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:49 compute-0 sudo[293221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:55:49 compute-0 sudo[293221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:55:49 compute-0 sudo[293221]: pam_unix(sudo:session): session closed for user root
Nov 24 18:55:50 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1239: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:50 compute-0 sudo[293246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:55:50 compute-0 sudo[293246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:55:50 compute-0 sudo[293246]: pam_unix(sudo:session): session closed for user root
Nov 24 18:55:50 compute-0 sudo[293271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:55:50 compute-0 sudo[293271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:55:50 compute-0 sudo[293271]: pam_unix(sudo:session): session closed for user root
Nov 24 18:55:50 compute-0 sudo[293296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:55:50 compute-0 sudo[293296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:55:50 compute-0 sudo[293296]: pam_unix(sudo:session): session closed for user root
Nov 24 18:55:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:55:50 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:55:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:55:50 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:55:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:55:50 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:55:50 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 6c3bd060-c136-42a9-b9e8-916bef5a3198 does not exist
Nov 24 18:55:50 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev da245d16-190c-4a9d-adc6-82437747e5d7 does not exist
Nov 24 18:55:50 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 6c32bbf6-64b9-4ec2-9494-635e2f78484f does not exist
Nov 24 18:55:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:55:50 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:55:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:55:50 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:55:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:55:50 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:55:50 compute-0 sudo[293354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:55:50 compute-0 sudo[293354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:55:50 compute-0 sudo[293354]: pam_unix(sudo:session): session closed for user root
Nov 24 18:55:50 compute-0 sudo[293379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:55:50 compute-0 sudo[293379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:55:50 compute-0 sudo[293379]: pam_unix(sudo:session): session closed for user root
Nov 24 18:55:51 compute-0 sudo[293404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:55:51 compute-0 sudo[293404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:55:51 compute-0 sudo[293404]: pam_unix(sudo:session): session closed for user root
Nov 24 18:55:51 compute-0 sudo[293429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:55:51 compute-0 sudo[293429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:55:51 compute-0 ceph-mon[74927]: pgmap v1239: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:51 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:55:51 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:55:51 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:55:51 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:55:51 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:55:51 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:55:51 compute-0 podman[293496]: 2025-11-24 18:55:51.392063172 +0000 UTC m=+0.038104552 container create b9610bc2ab4f358201fdd642c7c3ce8437caa8e8b86f9de14130315df0152152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Nov 24 18:55:51 compute-0 systemd[1]: Started libpod-conmon-b9610bc2ab4f358201fdd642c7c3ce8437caa8e8b86f9de14130315df0152152.scope.
Nov 24 18:55:51 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:55:51 compute-0 podman[293496]: 2025-11-24 18:55:51.46475748 +0000 UTC m=+0.110798850 container init b9610bc2ab4f358201fdd642c7c3ce8437caa8e8b86f9de14130315df0152152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Nov 24 18:55:51 compute-0 podman[293496]: 2025-11-24 18:55:51.373250512 +0000 UTC m=+0.019291902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:55:51 compute-0 podman[293496]: 2025-11-24 18:55:51.472059348 +0000 UTC m=+0.118100718 container start b9610bc2ab4f358201fdd642c7c3ce8437caa8e8b86f9de14130315df0152152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermi, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 18:55:51 compute-0 podman[293496]: 2025-11-24 18:55:51.475294537 +0000 UTC m=+0.121335917 container attach b9610bc2ab4f358201fdd642c7c3ce8437caa8e8b86f9de14130315df0152152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 18:55:51 compute-0 blissful_fermi[293513]: 167 167
Nov 24 18:55:51 compute-0 systemd[1]: libpod-b9610bc2ab4f358201fdd642c7c3ce8437caa8e8b86f9de14130315df0152152.scope: Deactivated successfully.
Nov 24 18:55:51 compute-0 podman[293496]: 2025-11-24 18:55:51.477099181 +0000 UTC m=+0.123140562 container died b9610bc2ab4f358201fdd642c7c3ce8437caa8e8b86f9de14130315df0152152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:55:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e491108407580a37cf6b65dc11a77c67c5d145428d56be8862db381261b56b8-merged.mount: Deactivated successfully.
Nov 24 18:55:51 compute-0 podman[293496]: 2025-11-24 18:55:51.512725253 +0000 UTC m=+0.158766623 container remove b9610bc2ab4f358201fdd642c7c3ce8437caa8e8b86f9de14130315df0152152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 18:55:51 compute-0 systemd[1]: libpod-conmon-b9610bc2ab4f358201fdd642c7c3ce8437caa8e8b86f9de14130315df0152152.scope: Deactivated successfully.
Nov 24 18:55:51 compute-0 podman[293537]: 2025-11-24 18:55:51.717027738 +0000 UTC m=+0.061390802 container create 992494f45183997d4c999a6854683202296ca1efcbeca431f5e40ce8eae046b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_sammet, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 18:55:51 compute-0 systemd[1]: Started libpod-conmon-992494f45183997d4c999a6854683202296ca1efcbeca431f5e40ce8eae046b1.scope.
Nov 24 18:55:51 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:55:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe76f457d31beed41a31da74d42b353dbab6b9fb80c01040c8cdb1058464435/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:55:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe76f457d31beed41a31da74d42b353dbab6b9fb80c01040c8cdb1058464435/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:55:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe76f457d31beed41a31da74d42b353dbab6b9fb80c01040c8cdb1058464435/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:55:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe76f457d31beed41a31da74d42b353dbab6b9fb80c01040c8cdb1058464435/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:55:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe76f457d31beed41a31da74d42b353dbab6b9fb80c01040c8cdb1058464435/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:55:51 compute-0 podman[293537]: 2025-11-24 18:55:51.696392103 +0000 UTC m=+0.040755197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:55:51 compute-0 podman[293537]: 2025-11-24 18:55:51.800276003 +0000 UTC m=+0.144639127 container init 992494f45183997d4c999a6854683202296ca1efcbeca431f5e40ce8eae046b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_sammet, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:55:51 compute-0 podman[293537]: 2025-11-24 18:55:51.811560579 +0000 UTC m=+0.155923653 container start 992494f45183997d4c999a6854683202296ca1efcbeca431f5e40ce8eae046b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_sammet, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 18:55:51 compute-0 podman[293537]: 2025-11-24 18:55:51.815390223 +0000 UTC m=+0.159753307 container attach 992494f45183997d4c999a6854683202296ca1efcbeca431f5e40ce8eae046b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_sammet, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 18:55:52 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1240: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:52 compute-0 cranky_sammet[293554]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:55:52 compute-0 cranky_sammet[293554]: --> relative data size: 1.0
Nov 24 18:55:52 compute-0 cranky_sammet[293554]: --> All data devices are unavailable
Nov 24 18:55:52 compute-0 systemd[1]: libpod-992494f45183997d4c999a6854683202296ca1efcbeca431f5e40ce8eae046b1.scope: Deactivated successfully.
Nov 24 18:55:52 compute-0 podman[293537]: 2025-11-24 18:55:52.75348125 +0000 UTC m=+1.097844294 container died 992494f45183997d4c999a6854683202296ca1efcbeca431f5e40ce8eae046b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 18:55:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fe76f457d31beed41a31da74d42b353dbab6b9fb80c01040c8cdb1058464435-merged.mount: Deactivated successfully.
Nov 24 18:55:52 compute-0 podman[293537]: 2025-11-24 18:55:52.811368725 +0000 UTC m=+1.155731769 container remove 992494f45183997d4c999a6854683202296ca1efcbeca431f5e40ce8eae046b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:55:52 compute-0 systemd[1]: libpod-conmon-992494f45183997d4c999a6854683202296ca1efcbeca431f5e40ce8eae046b1.scope: Deactivated successfully.
Nov 24 18:55:52 compute-0 sudo[293429]: pam_unix(sudo:session): session closed for user root
Nov 24 18:55:52 compute-0 sudo[293595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:55:52 compute-0 sudo[293595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:55:52 compute-0 sudo[293595]: pam_unix(sudo:session): session closed for user root
Nov 24 18:55:52 compute-0 sudo[293620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:55:52 compute-0 sudo[293620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:55:52 compute-0 sudo[293620]: pam_unix(sudo:session): session closed for user root
Nov 24 18:55:53 compute-0 sudo[293645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:55:53 compute-0 sudo[293645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:55:53 compute-0 sudo[293645]: pam_unix(sudo:session): session closed for user root
Nov 24 18:55:53 compute-0 sudo[293670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:55:53 compute-0 sudo[293670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:55:53 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:55:53 compute-0 ceph-mon[74927]: pgmap v1240: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:53 compute-0 podman[293735]: 2025-11-24 18:55:53.375346294 +0000 UTC m=+0.042471059 container create 1617e69b0f72614e4ff313417bb08bb4b0f94d852b7b84c142354164caecbae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 18:55:53 compute-0 systemd[1]: Started libpod-conmon-1617e69b0f72614e4ff313417bb08bb4b0f94d852b7b84c142354164caecbae3.scope.
Nov 24 18:55:53 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:55:53 compute-0 podman[293735]: 2025-11-24 18:55:53.354583226 +0000 UTC m=+0.021708081 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:55:53 compute-0 podman[293735]: 2025-11-24 18:55:53.459516272 +0000 UTC m=+0.126641087 container init 1617e69b0f72614e4ff313417bb08bb4b0f94d852b7b84c142354164caecbae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_knuth, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 18:55:53 compute-0 podman[293735]: 2025-11-24 18:55:53.465823386 +0000 UTC m=+0.132948161 container start 1617e69b0f72614e4ff313417bb08bb4b0f94d852b7b84c142354164caecbae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:55:53 compute-0 podman[293735]: 2025-11-24 18:55:53.46925411 +0000 UTC m=+0.136378895 container attach 1617e69b0f72614e4ff313417bb08bb4b0f94d852b7b84c142354164caecbae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_knuth, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:55:53 compute-0 gifted_knuth[293751]: 167 167
Nov 24 18:55:53 compute-0 systemd[1]: libpod-1617e69b0f72614e4ff313417bb08bb4b0f94d852b7b84c142354164caecbae3.scope: Deactivated successfully.
Nov 24 18:55:53 compute-0 podman[293735]: 2025-11-24 18:55:53.473712709 +0000 UTC m=+0.140837534 container died 1617e69b0f72614e4ff313417bb08bb4b0f94d852b7b84c142354164caecbae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_knuth, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 18:55:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-8824c7643a013a342276f1dbb0f38be000e5748fc3b1b9e46242d1b2b6868673-merged.mount: Deactivated successfully.
Nov 24 18:55:53 compute-0 podman[293735]: 2025-11-24 18:55:53.516569147 +0000 UTC m=+0.183693912 container remove 1617e69b0f72614e4ff313417bb08bb4b0f94d852b7b84c142354164caecbae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_knuth, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:55:53 compute-0 systemd[1]: libpod-conmon-1617e69b0f72614e4ff313417bb08bb4b0f94d852b7b84c142354164caecbae3.scope: Deactivated successfully.
Nov 24 18:55:53 compute-0 podman[293775]: 2025-11-24 18:55:53.671802492 +0000 UTC m=+0.041865504 container create 7395125d717f88ca8e4999c5725265641ee7ef2dc47310d21a6bfca5a5033c5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:55:53 compute-0 systemd[1]: Started libpod-conmon-7395125d717f88ca8e4999c5725265641ee7ef2dc47310d21a6bfca5a5033c5b.scope.
Nov 24 18:55:53 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88106b21d16ecbbd28e6435fd076af3ce391a1a7e725d9b822248d3998cf7c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88106b21d16ecbbd28e6435fd076af3ce391a1a7e725d9b822248d3998cf7c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88106b21d16ecbbd28e6435fd076af3ce391a1a7e725d9b822248d3998cf7c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88106b21d16ecbbd28e6435fd076af3ce391a1a7e725d9b822248d3998cf7c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:55:53 compute-0 podman[293775]: 2025-11-24 18:55:53.651442645 +0000 UTC m=+0.021505637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:55:53 compute-0 podman[293775]: 2025-11-24 18:55:53.75061794 +0000 UTC m=+0.120680912 container init 7395125d717f88ca8e4999c5725265641ee7ef2dc47310d21a6bfca5a5033c5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:55:53 compute-0 podman[293775]: 2025-11-24 18:55:53.756119474 +0000 UTC m=+0.126182436 container start 7395125d717f88ca8e4999c5725265641ee7ef2dc47310d21a6bfca5a5033c5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_sinoussi, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 18:55:53 compute-0 podman[293775]: 2025-11-24 18:55:53.758879602 +0000 UTC m=+0.128942594 container attach 7395125d717f88ca8e4999c5725265641ee7ef2dc47310d21a6bfca5a5033c5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 18:55:54 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1241: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]: {
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:     "0": [
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:         {
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "devices": [
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "/dev/loop3"
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             ],
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "lv_name": "ceph_lv0",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "lv_size": "21470642176",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "name": "ceph_lv0",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "tags": {
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.cluster_name": "ceph",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.crush_device_class": "",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.encrypted": "0",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.osd_id": "0",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.type": "block",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.vdo": "0"
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             },
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "type": "block",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "vg_name": "ceph_vg0"
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:         }
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:     ],
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:     "1": [
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:         {
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "devices": [
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "/dev/loop4"
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             ],
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "lv_name": "ceph_lv1",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "lv_size": "21470642176",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "name": "ceph_lv1",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "tags": {
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.cluster_name": "ceph",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.crush_device_class": "",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.encrypted": "0",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.osd_id": "1",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.type": "block",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.vdo": "0"
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             },
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "type": "block",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "vg_name": "ceph_vg1"
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:         }
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:     ],
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:     "2": [
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:         {
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "devices": [
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "/dev/loop5"
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             ],
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "lv_name": "ceph_lv2",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "lv_size": "21470642176",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "name": "ceph_lv2",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "tags": {
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.cluster_name": "ceph",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.crush_device_class": "",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.encrypted": "0",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.osd_id": "2",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.type": "block",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:                 "ceph.vdo": "0"
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             },
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "type": "block",
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:             "vg_name": "ceph_vg2"
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:         }
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]:     ]
Nov 24 18:55:54 compute-0 relaxed_sinoussi[293791]: }
Nov 24 18:55:54 compute-0 systemd[1]: libpod-7395125d717f88ca8e4999c5725265641ee7ef2dc47310d21a6bfca5a5033c5b.scope: Deactivated successfully.
Nov 24 18:55:54 compute-0 podman[293775]: 2025-11-24 18:55:54.499659014 +0000 UTC m=+0.869722016 container died 7395125d717f88ca8e4999c5725265641ee7ef2dc47310d21a6bfca5a5033c5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_sinoussi, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:55:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-b88106b21d16ecbbd28e6435fd076af3ce391a1a7e725d9b822248d3998cf7c5-merged.mount: Deactivated successfully.
Nov 24 18:55:54 compute-0 podman[293775]: 2025-11-24 18:55:54.558581695 +0000 UTC m=+0.928644657 container remove 7395125d717f88ca8e4999c5725265641ee7ef2dc47310d21a6bfca5a5033c5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_sinoussi, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:55:54 compute-0 systemd[1]: libpod-conmon-7395125d717f88ca8e4999c5725265641ee7ef2dc47310d21a6bfca5a5033c5b.scope: Deactivated successfully.
Nov 24 18:55:54 compute-0 sudo[293670]: pam_unix(sudo:session): session closed for user root
Nov 24 18:55:54 compute-0 sudo[293812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:55:54 compute-0 sudo[293812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:55:54 compute-0 sudo[293812]: pam_unix(sudo:session): session closed for user root
Nov 24 18:55:54 compute-0 sudo[293837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:55:54 compute-0 sudo[293837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:55:54 compute-0 sudo[293837]: pam_unix(sudo:session): session closed for user root
Nov 24 18:55:54 compute-0 sudo[293862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:55:54 compute-0 sudo[293862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:55:54 compute-0 sudo[293862]: pam_unix(sudo:session): session closed for user root
Nov 24 18:55:54 compute-0 sudo[293887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:55:54 compute-0 sudo[293887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:55:55 compute-0 podman[293952]: 2025-11-24 18:55:55.174059584 +0000 UTC m=+0.038316268 container create 910a13c29801c6e734ac29546efd895f80ce4fa9613eaab0fc88119eb166577a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_vaughan, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:55:55 compute-0 systemd[1]: Started libpod-conmon-910a13c29801c6e734ac29546efd895f80ce4fa9613eaab0fc88119eb166577a.scope.
Nov 24 18:55:55 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:55:55 compute-0 podman[293952]: 2025-11-24 18:55:55.242995959 +0000 UTC m=+0.107252653 container init 910a13c29801c6e734ac29546efd895f80ce4fa9613eaab0fc88119eb166577a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 18:55:55 compute-0 podman[293952]: 2025-11-24 18:55:55.248646627 +0000 UTC m=+0.112903301 container start 910a13c29801c6e734ac29546efd895f80ce4fa9613eaab0fc88119eb166577a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_vaughan, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 18:55:55 compute-0 podman[293952]: 2025-11-24 18:55:55.15632248 +0000 UTC m=+0.020579184 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:55:55 compute-0 podman[293952]: 2025-11-24 18:55:55.251290092 +0000 UTC m=+0.115546776 container attach 910a13c29801c6e734ac29546efd895f80ce4fa9613eaab0fc88119eb166577a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:55:55 compute-0 practical_vaughan[293968]: 167 167
Nov 24 18:55:55 compute-0 systemd[1]: libpod-910a13c29801c6e734ac29546efd895f80ce4fa9613eaab0fc88119eb166577a.scope: Deactivated successfully.
Nov 24 18:55:55 compute-0 podman[293952]: 2025-11-24 18:55:55.253316521 +0000 UTC m=+0.117573205 container died 910a13c29801c6e734ac29546efd895f80ce4fa9613eaab0fc88119eb166577a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 18:55:55 compute-0 ceph-mon[74927]: pgmap v1241: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-4987606d5cb081bf442c52b8586754bb6308ead8bf033018b72346c613abd7c5-merged.mount: Deactivated successfully.
Nov 24 18:55:55 compute-0 podman[293952]: 2025-11-24 18:55:55.339249363 +0000 UTC m=+0.203506037 container remove 910a13c29801c6e734ac29546efd895f80ce4fa9613eaab0fc88119eb166577a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_vaughan, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 18:55:55 compute-0 systemd[1]: libpod-conmon-910a13c29801c6e734ac29546efd895f80ce4fa9613eaab0fc88119eb166577a.scope: Deactivated successfully.
Nov 24 18:55:55 compute-0 podman[293994]: 2025-11-24 18:55:55.504527424 +0000 UTC m=+0.048234851 container create cfd1028f28380da0400705cc27e3fb2e43da58d42170f3568841c9f7c57e16d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 18:55:55 compute-0 systemd[1]: Started libpod-conmon-cfd1028f28380da0400705cc27e3fb2e43da58d42170f3568841c9f7c57e16d5.scope.
Nov 24 18:55:55 compute-0 podman[293994]: 2025-11-24 18:55:55.479099442 +0000 UTC m=+0.022806779 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:55:55 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdf2804f16c26d08481c99a0775a6c5d92f516fb233480824dc65f7c02451bcb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdf2804f16c26d08481c99a0775a6c5d92f516fb233480824dc65f7c02451bcb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdf2804f16c26d08481c99a0775a6c5d92f516fb233480824dc65f7c02451bcb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdf2804f16c26d08481c99a0775a6c5d92f516fb233480824dc65f7c02451bcb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:55:55 compute-0 podman[293994]: 2025-11-24 18:55:55.587479842 +0000 UTC m=+0.131187089 container init cfd1028f28380da0400705cc27e3fb2e43da58d42170f3568841c9f7c57e16d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 18:55:55 compute-0 podman[293994]: 2025-11-24 18:55:55.59927278 +0000 UTC m=+0.142980027 container start cfd1028f28380da0400705cc27e3fb2e43da58d42170f3568841c9f7c57e16d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 18:55:55 compute-0 podman[293994]: 2025-11-24 18:55:55.603613257 +0000 UTC m=+0.147320524 container attach cfd1028f28380da0400705cc27e3fb2e43da58d42170f3568841c9f7c57e16d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:55:56 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1242: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:56 compute-0 gallant_ride[294010]: {
Nov 24 18:55:56 compute-0 gallant_ride[294010]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:55:56 compute-0 gallant_ride[294010]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:55:56 compute-0 gallant_ride[294010]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:55:56 compute-0 gallant_ride[294010]:         "osd_id": 0,
Nov 24 18:55:56 compute-0 gallant_ride[294010]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:55:56 compute-0 gallant_ride[294010]:         "type": "bluestore"
Nov 24 18:55:56 compute-0 gallant_ride[294010]:     },
Nov 24 18:55:56 compute-0 gallant_ride[294010]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:55:56 compute-0 gallant_ride[294010]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:55:56 compute-0 gallant_ride[294010]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:55:56 compute-0 gallant_ride[294010]:         "osd_id": 1,
Nov 24 18:55:56 compute-0 gallant_ride[294010]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:55:56 compute-0 gallant_ride[294010]:         "type": "bluestore"
Nov 24 18:55:56 compute-0 gallant_ride[294010]:     },
Nov 24 18:55:56 compute-0 gallant_ride[294010]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:55:56 compute-0 gallant_ride[294010]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:55:56 compute-0 gallant_ride[294010]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:55:56 compute-0 gallant_ride[294010]:         "osd_id": 2,
Nov 24 18:55:56 compute-0 gallant_ride[294010]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:55:56 compute-0 gallant_ride[294010]:         "type": "bluestore"
Nov 24 18:55:56 compute-0 gallant_ride[294010]:     }
Nov 24 18:55:56 compute-0 gallant_ride[294010]: }
Nov 24 18:55:56 compute-0 systemd[1]: libpod-cfd1028f28380da0400705cc27e3fb2e43da58d42170f3568841c9f7c57e16d5.scope: Deactivated successfully.
Nov 24 18:55:56 compute-0 podman[293994]: 2025-11-24 18:55:56.585521155 +0000 UTC m=+1.129228412 container died cfd1028f28380da0400705cc27e3fb2e43da58d42170f3568841c9f7c57e16d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:55:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-bdf2804f16c26d08481c99a0775a6c5d92f516fb233480824dc65f7c02451bcb-merged.mount: Deactivated successfully.
Nov 24 18:55:56 compute-0 podman[293994]: 2025-11-24 18:55:56.63359796 +0000 UTC m=+1.177305207 container remove cfd1028f28380da0400705cc27e3fb2e43da58d42170f3568841c9f7c57e16d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 18:55:56 compute-0 systemd[1]: libpod-conmon-cfd1028f28380da0400705cc27e3fb2e43da58d42170f3568841c9f7c57e16d5.scope: Deactivated successfully.
Nov 24 18:55:56 compute-0 sudo[293887]: pam_unix(sudo:session): session closed for user root
Nov 24 18:55:56 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:55:56 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:55:56 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:55:56 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:55:56 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 8ab42336-6663-4f64-aa09-65e1d1f03ce0 does not exist
Nov 24 18:55:56 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev af604f89-38b0-4ef0-b81d-132c4a982b2e does not exist
Nov 24 18:55:56 compute-0 sudo[294058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:55:56 compute-0 sudo[294058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:55:56 compute-0 sudo[294058]: pam_unix(sudo:session): session closed for user root
Nov 24 18:55:56 compute-0 sudo[294083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:55:56 compute-0 sudo[294083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:55:56 compute-0 sudo[294083]: pam_unix(sudo:session): session closed for user root
Nov 24 18:55:57 compute-0 ceph-mon[74927]: pgmap v1242: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:57 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:55:57 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:55:58 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1243: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:55:58 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:55:59 compute-0 ceph-mon[74927]: pgmap v1243: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:00 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1244: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:01 compute-0 ceph-mon[74927]: pgmap v1244: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:02 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1245: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:56:03 compute-0 ceph-mon[74927]: pgmap v1245: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:04 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1246: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:56:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:56:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:56:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:56:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:56:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:56:05 compute-0 ceph-mon[74927]: pgmap v1246: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:06 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1247: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:07 compute-0 ceph-mon[74927]: pgmap v1247: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:08 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1248: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:56:09 compute-0 ceph-mon[74927]: pgmap v1248: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:10 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1249: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:11 compute-0 ceph-mon[74927]: pgmap v1249: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:12 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1250: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:13 compute-0 sshd-session[294108]: Connection closed by authenticating user root 185.156.73.233 port 43538 [preauth]
Nov 24 18:56:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:56:13 compute-0 ceph-mon[74927]: pgmap v1250: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:14 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1251: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:14 compute-0 podman[294110]: 2025-11-24 18:56:14.963638545 +0000 UTC m=+0.059401624 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 18:56:14 compute-0 podman[294112]: 2025-11-24 18:56:14.986615857 +0000 UTC m=+0.075062207 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Nov 24 18:56:14 compute-0 podman[294111]: 2025-11-24 18:56:14.993583677 +0000 UTC m=+0.088923665 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 18:56:15 compute-0 ceph-mon[74927]: pgmap v1251: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:16 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1252: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:16 compute-0 nova_compute[270693]: 2025-11-24 18:56:16.524 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:56:16 compute-0 nova_compute[270693]: 2025-11-24 18:56:16.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:56:16 compute-0 nova_compute[270693]: 2025-11-24 18:56:16.528 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 18:56:16 compute-0 nova_compute[270693]: 2025-11-24 18:56:16.528 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 18:56:16 compute-0 nova_compute[270693]: 2025-11-24 18:56:16.543 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 18:56:16 compute-0 nova_compute[270693]: 2025-11-24 18:56:16.543 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:56:16 compute-0 nova_compute[270693]: 2025-11-24 18:56:16.597 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:56:16 compute-0 nova_compute[270693]: 2025-11-24 18:56:16.598 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:56:16 compute-0 nova_compute[270693]: 2025-11-24 18:56:16.598 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:56:16 compute-0 nova_compute[270693]: 2025-11-24 18:56:16.598 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 18:56:16 compute-0 nova_compute[270693]: 2025-11-24 18:56:16.599 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:56:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:56:17 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1409027887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:56:17 compute-0 nova_compute[270693]: 2025-11-24 18:56:17.083 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:56:17 compute-0 nova_compute[270693]: 2025-11-24 18:56:17.227 270697 WARNING nova.virt.libvirt.driver [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 18:56:17 compute-0 nova_compute[270693]: 2025-11-24 18:56:17.228 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5000MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 18:56:17 compute-0 nova_compute[270693]: 2025-11-24 18:56:17.228 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:56:17 compute-0 nova_compute[270693]: 2025-11-24 18:56:17.228 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:56:17 compute-0 nova_compute[270693]: 2025-11-24 18:56:17.330 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 18:56:17 compute-0 nova_compute[270693]: 2025-11-24 18:56:17.330 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 18:56:17 compute-0 nova_compute[270693]: 2025-11-24 18:56:17.348 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:56:17 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:56:17 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2244117392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:56:17 compute-0 nova_compute[270693]: 2025-11-24 18:56:17.727 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.378s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:56:17 compute-0 nova_compute[270693]: 2025-11-24 18:56:17.734 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 18:56:17 compute-0 nova_compute[270693]: 2025-11-24 18:56:17.791 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 18:56:17 compute-0 nova_compute[270693]: 2025-11-24 18:56:17.794 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 18:56:17 compute-0 nova_compute[270693]: 2025-11-24 18:56:17.795 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:56:17 compute-0 ceph-mon[74927]: pgmap v1252: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:17 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1409027887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:56:17 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2244117392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:56:18 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1253: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:18 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:56:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 18:56:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3111280441' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:56:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 18:56:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3111280441' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:56:19 compute-0 nova_compute[270693]: 2025-11-24 18:56:19.781 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:56:19 compute-0 nova_compute[270693]: 2025-11-24 18:56:19.781 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 18:56:19 compute-0 ceph-mon[74927]: pgmap v1253: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/3111280441' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:56:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/3111280441' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:56:20 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1254: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:20 compute-0 nova_compute[270693]: 2025-11-24 18:56:20.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:56:20 compute-0 nova_compute[270693]: 2025-11-24 18:56:20.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:56:21 compute-0 nova_compute[270693]: 2025-11-24 18:56:21.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:56:21 compute-0 ceph-mon[74927]: pgmap v1254: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:22 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1255: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:22 compute-0 nova_compute[270693]: 2025-11-24 18:56:22.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:56:22 compute-0 nova_compute[270693]: 2025-11-24 18:56:22.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:56:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:56:22.753 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:56:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:56:22.753 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:56:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:56:22.753 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:56:23 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:56:23 compute-0 ceph-mon[74927]: pgmap v1255: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:24 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1256: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:25 compute-0 ceph-mon[74927]: pgmap v1256: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:26 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1257: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:27 compute-0 ceph-mon[74927]: pgmap v1257: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:28 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1258: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:28 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:56:29 compute-0 ceph-mon[74927]: pgmap v1258: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:30 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1259: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:31 compute-0 ceph-mon[74927]: pgmap v1259: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:32 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1260: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:33 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:56:33 compute-0 ceph-mon[74927]: pgmap v1260: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:34 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1261: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:56:34
Nov 24 18:56:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:56:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:56:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'vms', 'volumes', '.rgw.root', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'backups', 'default.rgw.log']
Nov 24 18:56:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:56:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:56:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:56:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:56:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:56:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:56:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:56:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:56:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:56:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:56:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:56:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:56:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:56:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:56:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:56:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:56:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:56:35 compute-0 ceph-mon[74927]: pgmap v1261: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:36 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1262: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:37 compute-0 ceph-mon[74927]: pgmap v1262: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:38 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1263: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:38 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:56:39 compute-0 ceph-mon[74927]: pgmap v1263: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:40 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1264: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:41 compute-0 ceph-mon[74927]: pgmap v1264: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:42 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1265: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:56:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:56:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:56:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:56:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:56:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 18:56:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:56:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 18:56:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:56:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:56:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:56:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 18:56:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:56:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:56:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:56:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:56:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:56:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:56:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:56:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:56:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:56:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:56:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:56:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:56:43 compute-0 ceph-mon[74927]: pgmap v1265: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:44 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1266: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:45 compute-0 ceph-mon[74927]: pgmap v1266: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:45 compute-0 podman[294214]: 2025-11-24 18:56:45.980635152 +0000 UTC m=+0.061085825 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 18:56:45 compute-0 podman[294216]: 2025-11-24 18:56:45.995060235 +0000 UTC m=+0.069537951 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 18:56:46 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1267: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:46 compute-0 podman[294215]: 2025-11-24 18:56:46.086953622 +0000 UTC m=+0.167265841 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 18:56:47 compute-0 ceph-mon[74927]: pgmap v1267: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:48 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1268: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:56:49 compute-0 ceph-mon[74927]: pgmap v1268: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:50 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1269: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:51 compute-0 ceph-mon[74927]: pgmap v1269: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:52 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1270: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:53 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:56:53 compute-0 ceph-mon[74927]: pgmap v1270: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:54 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1271: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:55 compute-0 ceph-mon[74927]: pgmap v1271: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:56 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1272: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:56 compute-0 sudo[294275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:56:56 compute-0 sudo[294275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:56:56 compute-0 sudo[294275]: pam_unix(sudo:session): session closed for user root
Nov 24 18:56:56 compute-0 sudo[294300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:56:56 compute-0 sudo[294300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:56:56 compute-0 sudo[294300]: pam_unix(sudo:session): session closed for user root
Nov 24 18:56:57 compute-0 sudo[294325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:56:57 compute-0 sudo[294325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:56:57 compute-0 sudo[294325]: pam_unix(sudo:session): session closed for user root
Nov 24 18:56:57 compute-0 sudo[294350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:56:57 compute-0 sudo[294350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:56:57 compute-0 sudo[294350]: pam_unix(sudo:session): session closed for user root
Nov 24 18:56:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:56:57 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:56:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:56:57 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:56:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:56:57 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:56:57 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 26caf26c-5d42-4711-b260-903c4385220f does not exist
Nov 24 18:56:57 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 5b15f244-b496-4d2b-8d13-75ccca904f55 does not exist
Nov 24 18:56:57 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 55a1ac89-de69-4f8e-8f04-b63cd3e5fd89 does not exist
Nov 24 18:56:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:56:57 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:56:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:56:57 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:56:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:56:57 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:56:57 compute-0 sudo[294406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:56:57 compute-0 sudo[294406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:56:57 compute-0 sudo[294406]: pam_unix(sudo:session): session closed for user root
Nov 24 18:56:57 compute-0 sudo[294431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:56:57 compute-0 sudo[294431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:56:57 compute-0 sudo[294431]: pam_unix(sudo:session): session closed for user root
Nov 24 18:56:57 compute-0 sudo[294456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:56:57 compute-0 sudo[294456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:56:57 compute-0 sudo[294456]: pam_unix(sudo:session): session closed for user root
Nov 24 18:56:57 compute-0 sudo[294481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:56:57 compute-0 sudo[294481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:56:57 compute-0 ceph-mon[74927]: pgmap v1272: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:57 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:56:57 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:56:57 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:56:57 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:56:57 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:56:57 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:56:58 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1273: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:56:58 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:56:58 compute-0 podman[294543]: 2025-11-24 18:56:58.293453205 +0000 UTC m=+0.054612206 container create 1806a74497dd119a2b94f25f008829cdd0d05f5af12b04b7b9ef79a33ef03cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 18:56:58 compute-0 systemd[1]: Started libpod-conmon-1806a74497dd119a2b94f25f008829cdd0d05f5af12b04b7b9ef79a33ef03cd6.scope.
Nov 24 18:56:58 compute-0 podman[294543]: 2025-11-24 18:56:58.263565504 +0000 UTC m=+0.024724555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:56:58 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:56:58 compute-0 podman[294543]: 2025-11-24 18:56:58.392296702 +0000 UTC m=+0.153455693 container init 1806a74497dd119a2b94f25f008829cdd0d05f5af12b04b7b9ef79a33ef03cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 18:56:58 compute-0 podman[294543]: 2025-11-24 18:56:58.402327437 +0000 UTC m=+0.163486438 container start 1806a74497dd119a2b94f25f008829cdd0d05f5af12b04b7b9ef79a33ef03cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Nov 24 18:56:58 compute-0 podman[294543]: 2025-11-24 18:56:58.406298544 +0000 UTC m=+0.167457515 container attach 1806a74497dd119a2b94f25f008829cdd0d05f5af12b04b7b9ef79a33ef03cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ptolemy, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 18:56:58 compute-0 youthful_ptolemy[294559]: 167 167
Nov 24 18:56:58 compute-0 systemd[1]: libpod-1806a74497dd119a2b94f25f008829cdd0d05f5af12b04b7b9ef79a33ef03cd6.scope: Deactivated successfully.
Nov 24 18:56:58 compute-0 conmon[294559]: conmon 1806a74497dd119a2b94 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1806a74497dd119a2b94f25f008829cdd0d05f5af12b04b7b9ef79a33ef03cd6.scope/container/memory.events
Nov 24 18:56:58 compute-0 podman[294543]: 2025-11-24 18:56:58.410071156 +0000 UTC m=+0.171230197 container died 1806a74497dd119a2b94f25f008829cdd0d05f5af12b04b7b9ef79a33ef03cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ptolemy, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:56:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb94761c63dbbb3ab61b3c97da7d1fe794e755386a827e59372d06a26eae375e-merged.mount: Deactivated successfully.
Nov 24 18:56:58 compute-0 podman[294543]: 2025-11-24 18:56:58.470599146 +0000 UTC m=+0.231758127 container remove 1806a74497dd119a2b94f25f008829cdd0d05f5af12b04b7b9ef79a33ef03cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ptolemy, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:56:58 compute-0 systemd[1]: libpod-conmon-1806a74497dd119a2b94f25f008829cdd0d05f5af12b04b7b9ef79a33ef03cd6.scope: Deactivated successfully.
Nov 24 18:56:58 compute-0 podman[294583]: 2025-11-24 18:56:58.676140362 +0000 UTC m=+0.049367948 container create 1ff1215dd890e92ef3e059459152bca0287414f04edbbd0ca4dea91108eb6ba0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:56:58 compute-0 systemd[1]: Started libpod-conmon-1ff1215dd890e92ef3e059459152bca0287414f04edbbd0ca4dea91108eb6ba0.scope.
Nov 24 18:56:58 compute-0 podman[294583]: 2025-11-24 18:56:58.649039759 +0000 UTC m=+0.022267325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:56:58 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:56:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8de8f1ab2ed77fe62e72ac136dc72011fb40db054f6de5139bbb8c7244724d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:56:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8de8f1ab2ed77fe62e72ac136dc72011fb40db054f6de5139bbb8c7244724d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:56:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8de8f1ab2ed77fe62e72ac136dc72011fb40db054f6de5139bbb8c7244724d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:56:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8de8f1ab2ed77fe62e72ac136dc72011fb40db054f6de5139bbb8c7244724d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:56:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8de8f1ab2ed77fe62e72ac136dc72011fb40db054f6de5139bbb8c7244724d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:56:58 compute-0 podman[294583]: 2025-11-24 18:56:58.764597645 +0000 UTC m=+0.137825211 container init 1ff1215dd890e92ef3e059459152bca0287414f04edbbd0ca4dea91108eb6ba0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:56:58 compute-0 podman[294583]: 2025-11-24 18:56:58.778005302 +0000 UTC m=+0.151232848 container start 1ff1215dd890e92ef3e059459152bca0287414f04edbbd0ca4dea91108eb6ba0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ishizaka, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:56:58 compute-0 podman[294583]: 2025-11-24 18:56:58.781318403 +0000 UTC m=+0.154546039 container attach 1ff1215dd890e92ef3e059459152bca0287414f04edbbd0ca4dea91108eb6ba0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 18:56:59 compute-0 strange_ishizaka[294600]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:56:59 compute-0 strange_ishizaka[294600]: --> relative data size: 1.0
Nov 24 18:56:59 compute-0 strange_ishizaka[294600]: --> All data devices are unavailable
Nov 24 18:56:59 compute-0 systemd[1]: libpod-1ff1215dd890e92ef3e059459152bca0287414f04edbbd0ca4dea91108eb6ba0.scope: Deactivated successfully.
Nov 24 18:56:59 compute-0 podman[294583]: 2025-11-24 18:56:59.86664304 +0000 UTC m=+1.239870596 container died 1ff1215dd890e92ef3e059459152bca0287414f04edbbd0ca4dea91108eb6ba0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ishizaka, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:56:59 compute-0 systemd[1]: libpod-1ff1215dd890e92ef3e059459152bca0287414f04edbbd0ca4dea91108eb6ba0.scope: Consumed 1.041s CPU time.
Nov 24 18:56:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed8de8f1ab2ed77fe62e72ac136dc72011fb40db054f6de5139bbb8c7244724d-merged.mount: Deactivated successfully.
Nov 24 18:56:59 compute-0 podman[294583]: 2025-11-24 18:56:59.931482236 +0000 UTC m=+1.304709782 container remove 1ff1215dd890e92ef3e059459152bca0287414f04edbbd0ca4dea91108eb6ba0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:56:59 compute-0 systemd[1]: libpod-conmon-1ff1215dd890e92ef3e059459152bca0287414f04edbbd0ca4dea91108eb6ba0.scope: Deactivated successfully.
Nov 24 18:56:59 compute-0 sudo[294481]: pam_unix(sudo:session): session closed for user root
Nov 24 18:56:59 compute-0 ceph-mon[74927]: pgmap v1273: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:00 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1274: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:00 compute-0 sudo[294643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:57:00 compute-0 sudo[294643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:57:00 compute-0 sudo[294643]: pam_unix(sudo:session): session closed for user root
Nov 24 18:57:00 compute-0 sudo[294668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:57:00 compute-0 sudo[294668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:57:00 compute-0 sudo[294668]: pam_unix(sudo:session): session closed for user root
Nov 24 18:57:00 compute-0 sudo[294693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:57:00 compute-0 sudo[294693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:57:00 compute-0 sudo[294693]: pam_unix(sudo:session): session closed for user root
Nov 24 18:57:00 compute-0 sudo[294718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:57:00 compute-0 sudo[294718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:57:00 compute-0 podman[294784]: 2025-11-24 18:57:00.567822555 +0000 UTC m=+0.050722401 container create 9a5c97dd8474994ad1dd68f557804308879b512b647348cedcb53f006a221c72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 18:57:00 compute-0 systemd[1]: Started libpod-conmon-9a5c97dd8474994ad1dd68f557804308879b512b647348cedcb53f006a221c72.scope.
Nov 24 18:57:00 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:57:00 compute-0 podman[294784]: 2025-11-24 18:57:00.541318327 +0000 UTC m=+0.024218213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:57:00 compute-0 podman[294784]: 2025-11-24 18:57:00.986132893 +0000 UTC m=+0.469032799 container init 9a5c97dd8474994ad1dd68f557804308879b512b647348cedcb53f006a221c72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_elgamal, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:57:00 compute-0 podman[294784]: 2025-11-24 18:57:00.992178071 +0000 UTC m=+0.475077897 container start 9a5c97dd8474994ad1dd68f557804308879b512b647348cedcb53f006a221c72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:57:00 compute-0 podman[294784]: 2025-11-24 18:57:00.996096316 +0000 UTC m=+0.478996182 container attach 9a5c97dd8474994ad1dd68f557804308879b512b647348cedcb53f006a221c72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 18:57:00 compute-0 objective_elgamal[294800]: 167 167
Nov 24 18:57:00 compute-0 systemd[1]: libpod-9a5c97dd8474994ad1dd68f557804308879b512b647348cedcb53f006a221c72.scope: Deactivated successfully.
Nov 24 18:57:00 compute-0 podman[294784]: 2025-11-24 18:57:00.998254569 +0000 UTC m=+0.481154405 container died 9a5c97dd8474994ad1dd68f557804308879b512b647348cedcb53f006a221c72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_elgamal, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 18:57:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd185a1a4c8602d79f09a797d66e67616dbcf431b577cffa42e311c1c1e01e98-merged.mount: Deactivated successfully.
Nov 24 18:57:01 compute-0 podman[294784]: 2025-11-24 18:57:01.038614846 +0000 UTC m=+0.521514692 container remove 9a5c97dd8474994ad1dd68f557804308879b512b647348cedcb53f006a221c72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_elgamal, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 18:57:01 compute-0 systemd[1]: libpod-conmon-9a5c97dd8474994ad1dd68f557804308879b512b647348cedcb53f006a221c72.scope: Deactivated successfully.
Nov 24 18:57:01 compute-0 podman[294824]: 2025-11-24 18:57:01.230782435 +0000 UTC m=+0.045744220 container create 4948e9077cd988637ee15a4227275663b0d194fe7d1154b9578dd36090913a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_perlman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:57:01 compute-0 systemd[1]: Started libpod-conmon-4948e9077cd988637ee15a4227275663b0d194fe7d1154b9578dd36090913a67.scope.
Nov 24 18:57:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:57:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41650be5d281706ec61ee3667db3785801f2c9118c500de8584d4a32745a0038/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:57:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41650be5d281706ec61ee3667db3785801f2c9118c500de8584d4a32745a0038/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:57:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41650be5d281706ec61ee3667db3785801f2c9118c500de8584d4a32745a0038/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:57:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41650be5d281706ec61ee3667db3785801f2c9118c500de8584d4a32745a0038/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:57:01 compute-0 podman[294824]: 2025-11-24 18:57:01.212456567 +0000 UTC m=+0.027418362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:57:01 compute-0 podman[294824]: 2025-11-24 18:57:01.318143991 +0000 UTC m=+0.133105776 container init 4948e9077cd988637ee15a4227275663b0d194fe7d1154b9578dd36090913a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_perlman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:57:01 compute-0 podman[294824]: 2025-11-24 18:57:01.335398713 +0000 UTC m=+0.150360498 container start 4948e9077cd988637ee15a4227275663b0d194fe7d1154b9578dd36090913a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_perlman, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 18:57:01 compute-0 podman[294824]: 2025-11-24 18:57:01.340039136 +0000 UTC m=+0.155000921 container attach 4948e9077cd988637ee15a4227275663b0d194fe7d1154b9578dd36090913a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:57:01 compute-0 ceph-mon[74927]: pgmap v1274: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:02 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1275: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:02 compute-0 magical_perlman[294840]: {
Nov 24 18:57:02 compute-0 magical_perlman[294840]:     "0": [
Nov 24 18:57:02 compute-0 magical_perlman[294840]:         {
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "devices": [
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "/dev/loop3"
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             ],
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "lv_name": "ceph_lv0",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "lv_size": "21470642176",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "name": "ceph_lv0",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "tags": {
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.cluster_name": "ceph",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.crush_device_class": "",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.encrypted": "0",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.osd_id": "0",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.type": "block",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.vdo": "0"
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             },
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "type": "block",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "vg_name": "ceph_vg0"
Nov 24 18:57:02 compute-0 magical_perlman[294840]:         }
Nov 24 18:57:02 compute-0 magical_perlman[294840]:     ],
Nov 24 18:57:02 compute-0 magical_perlman[294840]:     "1": [
Nov 24 18:57:02 compute-0 magical_perlman[294840]:         {
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "devices": [
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "/dev/loop4"
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             ],
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "lv_name": "ceph_lv1",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "lv_size": "21470642176",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "name": "ceph_lv1",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "tags": {
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.cluster_name": "ceph",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.crush_device_class": "",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.encrypted": "0",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.osd_id": "1",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.type": "block",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.vdo": "0"
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             },
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "type": "block",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "vg_name": "ceph_vg1"
Nov 24 18:57:02 compute-0 magical_perlman[294840]:         }
Nov 24 18:57:02 compute-0 magical_perlman[294840]:     ],
Nov 24 18:57:02 compute-0 magical_perlman[294840]:     "2": [
Nov 24 18:57:02 compute-0 magical_perlman[294840]:         {
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "devices": [
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "/dev/loop5"
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             ],
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "lv_name": "ceph_lv2",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "lv_size": "21470642176",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "name": "ceph_lv2",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "tags": {
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.cluster_name": "ceph",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.crush_device_class": "",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.encrypted": "0",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.osd_id": "2",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.type": "block",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:                 "ceph.vdo": "0"
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             },
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "type": "block",
Nov 24 18:57:02 compute-0 magical_perlman[294840]:             "vg_name": "ceph_vg2"
Nov 24 18:57:02 compute-0 magical_perlman[294840]:         }
Nov 24 18:57:02 compute-0 magical_perlman[294840]:     ]
Nov 24 18:57:02 compute-0 magical_perlman[294840]: }
Nov 24 18:57:02 compute-0 systemd[1]: libpod-4948e9077cd988637ee15a4227275663b0d194fe7d1154b9578dd36090913a67.scope: Deactivated successfully.
Nov 24 18:57:02 compute-0 podman[294824]: 2025-11-24 18:57:02.10379965 +0000 UTC m=+0.918761465 container died 4948e9077cd988637ee15a4227275663b0d194fe7d1154b9578dd36090913a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_perlman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 24 18:57:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-41650be5d281706ec61ee3667db3785801f2c9118c500de8584d4a32745a0038-merged.mount: Deactivated successfully.
Nov 24 18:57:02 compute-0 podman[294824]: 2025-11-24 18:57:02.210945779 +0000 UTC m=+1.025907574 container remove 4948e9077cd988637ee15a4227275663b0d194fe7d1154b9578dd36090913a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 18:57:02 compute-0 systemd[1]: libpod-conmon-4948e9077cd988637ee15a4227275663b0d194fe7d1154b9578dd36090913a67.scope: Deactivated successfully.
Nov 24 18:57:02 compute-0 sudo[294718]: pam_unix(sudo:session): session closed for user root
Nov 24 18:57:02 compute-0 sudo[294861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:57:02 compute-0 sudo[294861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:57:02 compute-0 sudo[294861]: pam_unix(sudo:session): session closed for user root
Nov 24 18:57:02 compute-0 sudo[294886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:57:02 compute-0 sudo[294886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:57:02 compute-0 sudo[294886]: pam_unix(sudo:session): session closed for user root
Nov 24 18:57:02 compute-0 sudo[294911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:57:02 compute-0 sudo[294911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:57:02 compute-0 sudo[294911]: pam_unix(sudo:session): session closed for user root
Nov 24 18:57:02 compute-0 sudo[294936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:57:02 compute-0 sudo[294936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:57:02 compute-0 ceph-mon[74927]: pgmap v1275: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:03 compute-0 podman[295003]: 2025-11-24 18:57:03.055351876 +0000 UTC m=+0.064847757 container create c103adeab67b54c1dd91d99824c216cbdfdcad1df840de4a1c2258f4ef345e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_feynman, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:57:03 compute-0 systemd[1]: Started libpod-conmon-c103adeab67b54c1dd91d99824c216cbdfdcad1df840de4a1c2258f4ef345e26.scope.
Nov 24 18:57:03 compute-0 podman[295003]: 2025-11-24 18:57:03.031579764 +0000 UTC m=+0.041075685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:57:03 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:57:03 compute-0 podman[295003]: 2025-11-24 18:57:03.15943484 +0000 UTC m=+0.168930761 container init c103adeab67b54c1dd91d99824c216cbdfdcad1df840de4a1c2258f4ef345e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_feynman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 18:57:03 compute-0 podman[295003]: 2025-11-24 18:57:03.171844904 +0000 UTC m=+0.181340785 container start c103adeab67b54c1dd91d99824c216cbdfdcad1df840de4a1c2258f4ef345e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_feynman, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:57:03 compute-0 podman[295003]: 2025-11-24 18:57:03.176246222 +0000 UTC m=+0.185742153 container attach c103adeab67b54c1dd91d99824c216cbdfdcad1df840de4a1c2258f4ef345e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 18:57:03 compute-0 condescending_feynman[295019]: 167 167
Nov 24 18:57:03 compute-0 systemd[1]: libpod-c103adeab67b54c1dd91d99824c216cbdfdcad1df840de4a1c2258f4ef345e26.scope: Deactivated successfully.
Nov 24 18:57:03 compute-0 conmon[295019]: conmon c103adeab67b54c1dd91 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c103adeab67b54c1dd91d99824c216cbdfdcad1df840de4a1c2258f4ef345e26.scope/container/memory.events
Nov 24 18:57:03 compute-0 podman[295003]: 2025-11-24 18:57:03.182388722 +0000 UTC m=+0.191884623 container died c103adeab67b54c1dd91d99824c216cbdfdcad1df840de4a1c2258f4ef345e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_feynman, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 18:57:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:57:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-982671a702dcdc401bc785a2d4ae74e8a49765446b2170b9e214a092be621479-merged.mount: Deactivated successfully.
Nov 24 18:57:03 compute-0 podman[295003]: 2025-11-24 18:57:03.239697653 +0000 UTC m=+0.249193534 container remove c103adeab67b54c1dd91d99824c216cbdfdcad1df840de4a1c2258f4ef345e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_feynman, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:57:03 compute-0 systemd[1]: libpod-conmon-c103adeab67b54c1dd91d99824c216cbdfdcad1df840de4a1c2258f4ef345e26.scope: Deactivated successfully.
Nov 24 18:57:03 compute-0 podman[295044]: 2025-11-24 18:57:03.48536154 +0000 UTC m=+0.064538889 container create 1238047ad75a488c9a3313e6c875b7719b1f97a82bc261e948a7ab5e93d6a135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shaw, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 18:57:03 compute-0 systemd[1]: Started libpod-conmon-1238047ad75a488c9a3313e6c875b7719b1f97a82bc261e948a7ab5e93d6a135.scope.
Nov 24 18:57:03 compute-0 podman[295044]: 2025-11-24 18:57:03.454481325 +0000 UTC m=+0.033658744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:57:03 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:57:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1f353d1a6b0ea5f63c562e2c9920084798b89268deead9cefcc521238cfb8a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:57:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1f353d1a6b0ea5f63c562e2c9920084798b89268deead9cefcc521238cfb8a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:57:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1f353d1a6b0ea5f63c562e2c9920084798b89268deead9cefcc521238cfb8a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:57:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1f353d1a6b0ea5f63c562e2c9920084798b89268deead9cefcc521238cfb8a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:57:03 compute-0 podman[295044]: 2025-11-24 18:57:03.597671416 +0000 UTC m=+0.176848835 container init 1238047ad75a488c9a3313e6c875b7719b1f97a82bc261e948a7ab5e93d6a135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shaw, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:57:03 compute-0 podman[295044]: 2025-11-24 18:57:03.61298954 +0000 UTC m=+0.192166919 container start 1238047ad75a488c9a3313e6c875b7719b1f97a82bc261e948a7ab5e93d6a135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shaw, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:57:03 compute-0 podman[295044]: 2025-11-24 18:57:03.617030409 +0000 UTC m=+0.196207788 container attach 1238047ad75a488c9a3313e6c875b7719b1f97a82bc261e948a7ab5e93d6a135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shaw, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:57:04 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1276: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:04 compute-0 charming_shaw[295060]: {
Nov 24 18:57:04 compute-0 charming_shaw[295060]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:57:04 compute-0 charming_shaw[295060]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:57:04 compute-0 charming_shaw[295060]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:57:04 compute-0 charming_shaw[295060]:         "osd_id": 0,
Nov 24 18:57:04 compute-0 charming_shaw[295060]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:57:04 compute-0 charming_shaw[295060]:         "type": "bluestore"
Nov 24 18:57:04 compute-0 charming_shaw[295060]:     },
Nov 24 18:57:04 compute-0 charming_shaw[295060]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:57:04 compute-0 charming_shaw[295060]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:57:04 compute-0 charming_shaw[295060]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:57:04 compute-0 charming_shaw[295060]:         "osd_id": 1,
Nov 24 18:57:04 compute-0 charming_shaw[295060]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:57:04 compute-0 charming_shaw[295060]:         "type": "bluestore"
Nov 24 18:57:04 compute-0 charming_shaw[295060]:     },
Nov 24 18:57:04 compute-0 charming_shaw[295060]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:57:04 compute-0 charming_shaw[295060]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:57:04 compute-0 charming_shaw[295060]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:57:04 compute-0 charming_shaw[295060]:         "osd_id": 2,
Nov 24 18:57:04 compute-0 charming_shaw[295060]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:57:04 compute-0 charming_shaw[295060]:         "type": "bluestore"
Nov 24 18:57:04 compute-0 charming_shaw[295060]:     }
Nov 24 18:57:04 compute-0 charming_shaw[295060]: }
Nov 24 18:57:04 compute-0 systemd[1]: libpod-1238047ad75a488c9a3313e6c875b7719b1f97a82bc261e948a7ab5e93d6a135.scope: Deactivated successfully.
Nov 24 18:57:04 compute-0 podman[295044]: 2025-11-24 18:57:04.594659293 +0000 UTC m=+1.173836632 container died 1238047ad75a488c9a3313e6c875b7719b1f97a82bc261e948a7ab5e93d6a135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:57:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1f353d1a6b0ea5f63c562e2c9920084798b89268deead9cefcc521238cfb8a1-merged.mount: Deactivated successfully.
Nov 24 18:57:04 compute-0 podman[295044]: 2025-11-24 18:57:04.645168338 +0000 UTC m=+1.224345687 container remove 1238047ad75a488c9a3313e6c875b7719b1f97a82bc261e948a7ab5e93d6a135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shaw, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 18:57:04 compute-0 systemd[1]: libpod-conmon-1238047ad75a488c9a3313e6c875b7719b1f97a82bc261e948a7ab5e93d6a135.scope: Deactivated successfully.
Nov 24 18:57:04 compute-0 sudo[294936]: pam_unix(sudo:session): session closed for user root
Nov 24 18:57:04 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:57:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:57:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:57:04 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:57:04 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:57:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:57:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:57:04 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:57:04 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 3d8b4074-ca96-4491-a33a-e57db0c9f175 does not exist
Nov 24 18:57:04 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 0455ed5a-9d2a-45a8-9673-956cc51062b6 does not exist
Nov 24 18:57:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:57:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:57:04 compute-0 sudo[295105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:57:04 compute-0 sudo[295105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:57:04 compute-0 sudo[295105]: pam_unix(sudo:session): session closed for user root
Nov 24 18:57:04 compute-0 sudo[295130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:57:04 compute-0 sudo[295130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:57:04 compute-0 sudo[295130]: pam_unix(sudo:session): session closed for user root
Nov 24 18:57:05 compute-0 ceph-mon[74927]: pgmap v1276: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:05 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:57:05 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:57:06 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1277: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:07 compute-0 ceph-mon[74927]: pgmap v1277: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:08 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1278: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:57:09 compute-0 ceph-mon[74927]: pgmap v1278: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:10 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1279: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:11 compute-0 ceph-mon[74927]: pgmap v1279: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:12 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1280: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:57:13 compute-0 ceph-mon[74927]: pgmap v1280: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:14 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1281: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:15 compute-0 ceph-mon[74927]: pgmap v1281: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:16 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1282: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:16 compute-0 nova_compute[270693]: 2025-11-24 18:57:16.524 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:57:16 compute-0 podman[295157]: 2025-11-24 18:57:16.998864662 +0000 UTC m=+0.070501875 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Nov 24 18:57:17 compute-0 podman[295155]: 2025-11-24 18:57:16.99999954 +0000 UTC m=+0.081510364 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 18:57:17 compute-0 podman[295156]: 2025-11-24 18:57:17.035755034 +0000 UTC m=+0.123534192 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 18:57:17 compute-0 nova_compute[270693]: 2025-11-24 18:57:17.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:57:17 compute-0 nova_compute[270693]: 2025-11-24 18:57:17.530 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 18:57:17 compute-0 nova_compute[270693]: 2025-11-24 18:57:17.530 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 18:57:17 compute-0 nova_compute[270693]: 2025-11-24 18:57:17.545 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 18:57:17 compute-0 ceph-mon[74927]: pgmap v1282: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:18 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1283: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:18 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:57:18 compute-0 nova_compute[270693]: 2025-11-24 18:57:18.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:57:18 compute-0 nova_compute[270693]: 2025-11-24 18:57:18.580 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:57:18 compute-0 nova_compute[270693]: 2025-11-24 18:57:18.581 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:57:18 compute-0 nova_compute[270693]: 2025-11-24 18:57:18.581 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:57:18 compute-0 nova_compute[270693]: 2025-11-24 18:57:18.581 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 18:57:18 compute-0 nova_compute[270693]: 2025-11-24 18:57:18.581 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:57:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:57:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/223273830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:57:19 compute-0 nova_compute[270693]: 2025-11-24 18:57:19.043 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:57:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 18:57:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4117175616' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:57:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 18:57:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4117175616' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:57:19 compute-0 nova_compute[270693]: 2025-11-24 18:57:19.219 270697 WARNING nova.virt.libvirt.driver [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 18:57:19 compute-0 nova_compute[270693]: 2025-11-24 18:57:19.220 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5011MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 18:57:19 compute-0 nova_compute[270693]: 2025-11-24 18:57:19.220 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:57:19 compute-0 nova_compute[270693]: 2025-11-24 18:57:19.221 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:57:19 compute-0 nova_compute[270693]: 2025-11-24 18:57:19.318 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 18:57:19 compute-0 nova_compute[270693]: 2025-11-24 18:57:19.318 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 18:57:19 compute-0 nova_compute[270693]: 2025-11-24 18:57:19.365 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:57:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:57:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2532253814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:57:19 compute-0 nova_compute[270693]: 2025-11-24 18:57:19.807 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:57:19 compute-0 nova_compute[270693]: 2025-11-24 18:57:19.813 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 18:57:19 compute-0 nova_compute[270693]: 2025-11-24 18:57:19.840 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 18:57:19 compute-0 nova_compute[270693]: 2025-11-24 18:57:19.841 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 18:57:19 compute-0 nova_compute[270693]: 2025-11-24 18:57:19.841 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:57:19 compute-0 ceph-mon[74927]: pgmap v1283: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/223273830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:57:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/4117175616' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:57:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/4117175616' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:57:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2532253814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:57:20 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1284: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:21 compute-0 nova_compute[270693]: 2025-11-24 18:57:21.838 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:57:21 compute-0 nova_compute[270693]: 2025-11-24 18:57:21.856 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:57:21 compute-0 nova_compute[270693]: 2025-11-24 18:57:21.857 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:57:21 compute-0 nova_compute[270693]: 2025-11-24 18:57:21.857 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 18:57:21 compute-0 ceph-mon[74927]: pgmap v1284: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:22 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1285: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:22 compute-0 nova_compute[270693]: 2025-11-24 18:57:22.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:57:22 compute-0 nova_compute[270693]: 2025-11-24 18:57:22.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:57:22 compute-0 nova_compute[270693]: 2025-11-24 18:57:22.530 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:57:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:57:22.754 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:57:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:57:22.755 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:57:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:57:22.755 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:57:23 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:57:23 compute-0 ceph-mon[74927]: pgmap v1285: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:24 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1286: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:24 compute-0 nova_compute[270693]: 2025-11-24 18:57:24.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:57:25 compute-0 ceph-mon[74927]: pgmap v1286: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:26 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1287: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:27 compute-0 ceph-mon[74927]: pgmap v1287: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:28 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1288: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:28 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:57:29 compute-0 ceph-mon[74927]: pgmap v1288: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:30 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1289: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:31 compute-0 ceph-mon[74927]: pgmap v1289: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:32 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1290: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:33 compute-0 ceph-mon[74927]: pgmap v1290: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:33 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:57:34 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1291: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:57:34
Nov 24 18:57:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:57:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:57:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'default.rgw.meta', '.rgw.root', 'vms', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'backups', 'default.rgw.log']
Nov 24 18:57:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:57:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:57:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:57:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:57:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:57:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:57:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:57:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:57:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:57:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:57:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:57:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:57:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:57:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:57:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:57:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:57:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:57:35 compute-0 ceph-mon[74927]: pgmap v1291: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:36 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1292: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:37 compute-0 ceph-mon[74927]: pgmap v1292: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:38 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1293: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:38 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:57:39 compute-0 ceph-mon[74927]: pgmap v1293: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:40 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1294: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:41 compute-0 ceph-mon[74927]: pgmap v1294: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:42 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1295: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:43 compute-0 ceph-mon[74927]: pgmap v1295: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:57:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:57:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:57:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:57:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:57:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 18:57:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:57:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 18:57:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:57:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:57:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:57:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 18:57:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:57:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:57:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:57:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:57:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:57:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:57:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:57:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:57:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:57:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:57:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:57:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:57:44 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1296: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:45 compute-0 ceph-mon[74927]: pgmap v1296: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:46 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1297: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:47 compute-0 ceph-mon[74927]: pgmap v1297: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:47 compute-0 podman[295262]: 2025-11-24 18:57:47.973584158 +0000 UTC m=+0.060229363 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 18:57:47 compute-0 podman[295260]: 2025-11-24 18:57:47.992432609 +0000 UTC m=+0.069928241 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 24 18:57:48 compute-0 podman[295261]: 2025-11-24 18:57:48.019817919 +0000 UTC m=+0.106433183 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 18:57:48 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1298: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:57:49 compute-0 ceph-mon[74927]: pgmap v1298: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:50 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1299: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:51 compute-0 ceph-mon[74927]: pgmap v1299: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:52 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1300: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:53 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:57:53 compute-0 ceph-mon[74927]: pgmap v1300: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:54 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1301: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:55 compute-0 ceph-mon[74927]: pgmap v1301: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:56 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1302: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:57 compute-0 ceph-mon[74927]: pgmap v1302: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:58 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1303: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:57:58 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:57:59 compute-0 ceph-mon[74927]: pgmap v1303: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:00 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1304: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:01 compute-0 ceph-mon[74927]: pgmap v1304: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:02 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1305: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:58:03 compute-0 ceph-mon[74927]: pgmap v1305: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:04 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1306: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:58:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:58:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:58:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:58:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:58:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:58:04 compute-0 sudo[295323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:58:04 compute-0 sudo[295323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:58:05 compute-0 sudo[295323]: pam_unix(sudo:session): session closed for user root
Nov 24 18:58:05 compute-0 sudo[295348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:58:05 compute-0 sudo[295348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:58:05 compute-0 sudo[295348]: pam_unix(sudo:session): session closed for user root
Nov 24 18:58:05 compute-0 sudo[295373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:58:05 compute-0 sudo[295373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:58:05 compute-0 sudo[295373]: pam_unix(sudo:session): session closed for user root
Nov 24 18:58:05 compute-0 sudo[295398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:58:05 compute-0 sudo[295398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:58:05 compute-0 ceph-mon[74927]: pgmap v1306: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:05 compute-0 sudo[295398]: pam_unix(sudo:session): session closed for user root
Nov 24 18:58:05 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:58:05 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:58:05 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:58:05 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:58:05 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:58:05 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:58:05 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 5ecfe9f6-7272-44da-a186-8edc7dd77885 does not exist
Nov 24 18:58:05 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 41f12385-01f5-4023-a589-89c263714029 does not exist
Nov 24 18:58:05 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 5747bc10-5f64-405d-9c84-08ec96afeb9f does not exist
Nov 24 18:58:05 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:58:05 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:58:05 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:58:05 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:58:05 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:58:05 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:58:05 compute-0 sudo[295454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:58:05 compute-0 sudo[295454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:58:05 compute-0 sudo[295454]: pam_unix(sudo:session): session closed for user root
Nov 24 18:58:05 compute-0 sudo[295479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:58:05 compute-0 sudo[295479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:58:05 compute-0 sudo[295479]: pam_unix(sudo:session): session closed for user root
Nov 24 18:58:06 compute-0 sudo[295504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:58:06 compute-0 sudo[295504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:58:06 compute-0 sudo[295504]: pam_unix(sudo:session): session closed for user root
Nov 24 18:58:06 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1307: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:06 compute-0 sudo[295529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:58:06 compute-0 sudo[295529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:58:06 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:58:06 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:58:06 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:58:06 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:58:06 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:58:06 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:58:06 compute-0 podman[295595]: 2025-11-24 18:58:06.469180214 +0000 UTC m=+0.053423098 container create 82228a4818fe2813dc9ff9c2772a99b4c65739a2e2eb7ea290907d5cb9886036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 18:58:06 compute-0 systemd[1]: Started libpod-conmon-82228a4818fe2813dc9ff9c2772a99b4c65739a2e2eb7ea290907d5cb9886036.scope.
Nov 24 18:58:06 compute-0 podman[295595]: 2025-11-24 18:58:06.442234245 +0000 UTC m=+0.026477169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:58:06 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:58:06 compute-0 podman[295595]: 2025-11-24 18:58:06.574100569 +0000 UTC m=+0.158343503 container init 82228a4818fe2813dc9ff9c2772a99b4c65739a2e2eb7ea290907d5cb9886036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 24 18:58:06 compute-0 podman[295595]: 2025-11-24 18:58:06.585951429 +0000 UTC m=+0.170194313 container start 82228a4818fe2813dc9ff9c2772a99b4c65739a2e2eb7ea290907d5cb9886036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wright, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:58:06 compute-0 podman[295595]: 2025-11-24 18:58:06.589863255 +0000 UTC m=+0.174106099 container attach 82228a4818fe2813dc9ff9c2772a99b4c65739a2e2eb7ea290907d5cb9886036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wright, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:58:06 compute-0 distracted_wright[295612]: 167 167
Nov 24 18:58:06 compute-0 systemd[1]: libpod-82228a4818fe2813dc9ff9c2772a99b4c65739a2e2eb7ea290907d5cb9886036.scope: Deactivated successfully.
Nov 24 18:58:06 compute-0 podman[295595]: 2025-11-24 18:58:06.594201111 +0000 UTC m=+0.178443955 container died 82228a4818fe2813dc9ff9c2772a99b4c65739a2e2eb7ea290907d5cb9886036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wright, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:58:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cfb24ce46df06484255380deac1e3d43134a9739ebf71b04a10c32837d5932c-merged.mount: Deactivated successfully.
Nov 24 18:58:06 compute-0 podman[295595]: 2025-11-24 18:58:06.632008555 +0000 UTC m=+0.216251399 container remove 82228a4818fe2813dc9ff9c2772a99b4c65739a2e2eb7ea290907d5cb9886036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 18:58:06 compute-0 systemd[1]: libpod-conmon-82228a4818fe2813dc9ff9c2772a99b4c65739a2e2eb7ea290907d5cb9886036.scope: Deactivated successfully.
Nov 24 18:58:06 compute-0 podman[295636]: 2025-11-24 18:58:06.82157599 +0000 UTC m=+0.065665587 container create bd01bb15661836e7d3350ce0c5b17e58ca1c7ccb7dda5672c12139b3fcc6dea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 18:58:06 compute-0 systemd[1]: Started libpod-conmon-bd01bb15661836e7d3350ce0c5b17e58ca1c7ccb7dda5672c12139b3fcc6dea1.scope.
Nov 24 18:58:06 compute-0 podman[295636]: 2025-11-24 18:58:06.795301388 +0000 UTC m=+0.039391005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:58:06 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:58:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed78ca1f382d8c36b414b0edfa1229495cd156d84fd47bd7ab374ebefa3f8fa5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:58:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed78ca1f382d8c36b414b0edfa1229495cd156d84fd47bd7ab374ebefa3f8fa5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:58:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed78ca1f382d8c36b414b0edfa1229495cd156d84fd47bd7ab374ebefa3f8fa5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:58:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed78ca1f382d8c36b414b0edfa1229495cd156d84fd47bd7ab374ebefa3f8fa5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:58:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed78ca1f382d8c36b414b0edfa1229495cd156d84fd47bd7ab374ebefa3f8fa5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:58:06 compute-0 podman[295636]: 2025-11-24 18:58:06.924726612 +0000 UTC m=+0.168816249 container init bd01bb15661836e7d3350ce0c5b17e58ca1c7ccb7dda5672c12139b3fcc6dea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ptolemy, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 18:58:06 compute-0 podman[295636]: 2025-11-24 18:58:06.941937003 +0000 UTC m=+0.186026580 container start bd01bb15661836e7d3350ce0c5b17e58ca1c7ccb7dda5672c12139b3fcc6dea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ptolemy, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 18:58:06 compute-0 podman[295636]: 2025-11-24 18:58:06.94547057 +0000 UTC m=+0.189560147 container attach bd01bb15661836e7d3350ce0c5b17e58ca1c7ccb7dda5672c12139b3fcc6dea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 18:58:07 compute-0 ceph-mon[74927]: pgmap v1307: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:08 compute-0 keen_ptolemy[295652]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:58:08 compute-0 keen_ptolemy[295652]: --> relative data size: 1.0
Nov 24 18:58:08 compute-0 keen_ptolemy[295652]: --> All data devices are unavailable
Nov 24 18:58:08 compute-0 systemd[1]: libpod-bd01bb15661836e7d3350ce0c5b17e58ca1c7ccb7dda5672c12139b3fcc6dea1.scope: Deactivated successfully.
Nov 24 18:58:08 compute-0 systemd[1]: libpod-bd01bb15661836e7d3350ce0c5b17e58ca1c7ccb7dda5672c12139b3fcc6dea1.scope: Consumed 1.019s CPU time.
Nov 24 18:58:08 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1308: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:08 compute-0 podman[295681]: 2025-11-24 18:58:08.06078048 +0000 UTC m=+0.025194577 container died bd01bb15661836e7d3350ce0c5b17e58ca1c7ccb7dda5672c12139b3fcc6dea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ptolemy, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 18:58:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed78ca1f382d8c36b414b0edfa1229495cd156d84fd47bd7ab374ebefa3f8fa5-merged.mount: Deactivated successfully.
Nov 24 18:58:08 compute-0 podman[295681]: 2025-11-24 18:58:08.116919602 +0000 UTC m=+0.081333669 container remove bd01bb15661836e7d3350ce0c5b17e58ca1c7ccb7dda5672c12139b3fcc6dea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 24 18:58:08 compute-0 systemd[1]: libpod-conmon-bd01bb15661836e7d3350ce0c5b17e58ca1c7ccb7dda5672c12139b3fcc6dea1.scope: Deactivated successfully.
Nov 24 18:58:08 compute-0 sudo[295529]: pam_unix(sudo:session): session closed for user root
Nov 24 18:58:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:58:08 compute-0 sudo[295696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:58:08 compute-0 sudo[295696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:58:08 compute-0 sudo[295696]: pam_unix(sudo:session): session closed for user root
Nov 24 18:58:08 compute-0 sudo[295721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:58:08 compute-0 sudo[295721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:58:08 compute-0 sudo[295721]: pam_unix(sudo:session): session closed for user root
Nov 24 18:58:08 compute-0 sudo[295746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:58:08 compute-0 sudo[295746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:58:08 compute-0 sudo[295746]: pam_unix(sudo:session): session closed for user root
Nov 24 18:58:08 compute-0 sudo[295771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:58:08 compute-0 sudo[295771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:58:08 compute-0 podman[295836]: 2025-11-24 18:58:08.901303941 +0000 UTC m=+0.037927828 container create 56efcc867fef987a955c283b52bc70867fd0f53977485940bc2414392b1883e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lovelace, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:58:08 compute-0 systemd[1]: Started libpod-conmon-56efcc867fef987a955c283b52bc70867fd0f53977485940bc2414392b1883e0.scope.
Nov 24 18:58:08 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:58:08 compute-0 podman[295836]: 2025-11-24 18:58:08.88613102 +0000 UTC m=+0.022754927 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:58:08 compute-0 podman[295836]: 2025-11-24 18:58:08.988518533 +0000 UTC m=+0.125142430 container init 56efcc867fef987a955c283b52bc70867fd0f53977485940bc2414392b1883e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lovelace, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:58:08 compute-0 podman[295836]: 2025-11-24 18:58:08.998571119 +0000 UTC m=+0.135195006 container start 56efcc867fef987a955c283b52bc70867fd0f53977485940bc2414392b1883e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 18:58:09 compute-0 podman[295836]: 2025-11-24 18:58:09.001652575 +0000 UTC m=+0.138276482 container attach 56efcc867fef987a955c283b52bc70867fd0f53977485940bc2414392b1883e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Nov 24 18:58:09 compute-0 gifted_lovelace[295852]: 167 167
Nov 24 18:58:09 compute-0 systemd[1]: libpod-56efcc867fef987a955c283b52bc70867fd0f53977485940bc2414392b1883e0.scope: Deactivated successfully.
Nov 24 18:58:09 compute-0 podman[295836]: 2025-11-24 18:58:09.002672009 +0000 UTC m=+0.139295896 container died 56efcc867fef987a955c283b52bc70867fd0f53977485940bc2414392b1883e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lovelace, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:58:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a6865d003bff5fb427d29a34c7bd7df59857945c5a23d819011234f14e6586f-merged.mount: Deactivated successfully.
Nov 24 18:58:09 compute-0 podman[295836]: 2025-11-24 18:58:09.047461505 +0000 UTC m=+0.184085422 container remove 56efcc867fef987a955c283b52bc70867fd0f53977485940bc2414392b1883e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lovelace, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:58:09 compute-0 systemd[1]: libpod-conmon-56efcc867fef987a955c283b52bc70867fd0f53977485940bc2414392b1883e0.scope: Deactivated successfully.
Nov 24 18:58:09 compute-0 podman[295877]: 2025-11-24 18:58:09.213951405 +0000 UTC m=+0.036114864 container create fb3bc03dfb93ccfeceec76eb01548db170610cd8d4da212f436152f551d0f746 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_williams, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:58:09 compute-0 systemd[1]: Started libpod-conmon-fb3bc03dfb93ccfeceec76eb01548db170610cd8d4da212f436152f551d0f746.scope.
Nov 24 18:58:09 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:58:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62a836b02d5f19e0007a17438b1fd069a204ff9c1fdd6accd6326280c8e1f7be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:58:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62a836b02d5f19e0007a17438b1fd069a204ff9c1fdd6accd6326280c8e1f7be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:58:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62a836b02d5f19e0007a17438b1fd069a204ff9c1fdd6accd6326280c8e1f7be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:58:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62a836b02d5f19e0007a17438b1fd069a204ff9c1fdd6accd6326280c8e1f7be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:58:09 compute-0 podman[295877]: 2025-11-24 18:58:09.293364827 +0000 UTC m=+0.115528286 container init fb3bc03dfb93ccfeceec76eb01548db170610cd8d4da212f436152f551d0f746 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_williams, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:58:09 compute-0 podman[295877]: 2025-11-24 18:58:09.19940738 +0000 UTC m=+0.021570859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:58:09 compute-0 podman[295877]: 2025-11-24 18:58:09.31024423 +0000 UTC m=+0.132407689 container start fb3bc03dfb93ccfeceec76eb01548db170610cd8d4da212f436152f551d0f746 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_williams, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:58:09 compute-0 podman[295877]: 2025-11-24 18:58:09.313268424 +0000 UTC m=+0.135431883 container attach fb3bc03dfb93ccfeceec76eb01548db170610cd8d4da212f436152f551d0f746 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_williams, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:58:09 compute-0 ceph-mon[74927]: pgmap v1308: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:10 compute-0 peaceful_williams[295894]: {
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:     "0": [
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:         {
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "devices": [
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "/dev/loop3"
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             ],
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "lv_name": "ceph_lv0",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "lv_size": "21470642176",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "name": "ceph_lv0",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "tags": {
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.cluster_name": "ceph",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.crush_device_class": "",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.encrypted": "0",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.osd_id": "0",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.type": "block",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.vdo": "0"
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             },
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "type": "block",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "vg_name": "ceph_vg0"
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:         }
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:     ],
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:     "1": [
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:         {
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "devices": [
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "/dev/loop4"
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             ],
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "lv_name": "ceph_lv1",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "lv_size": "21470642176",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "name": "ceph_lv1",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "tags": {
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.cluster_name": "ceph",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.crush_device_class": "",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.encrypted": "0",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.osd_id": "1",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.type": "block",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.vdo": "0"
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             },
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "type": "block",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "vg_name": "ceph_vg1"
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:         }
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:     ],
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:     "2": [
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:         {
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "devices": [
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "/dev/loop5"
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             ],
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "lv_name": "ceph_lv2",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "lv_size": "21470642176",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "name": "ceph_lv2",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "tags": {
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.cluster_name": "ceph",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.crush_device_class": "",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.encrypted": "0",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.osd_id": "2",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.type": "block",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:                 "ceph.vdo": "0"
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             },
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "type": "block",
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:             "vg_name": "ceph_vg2"
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:         }
Nov 24 18:58:10 compute-0 peaceful_williams[295894]:     ]
Nov 24 18:58:10 compute-0 peaceful_williams[295894]: }
Nov 24 18:58:10 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1309: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:10 compute-0 systemd[1]: libpod-fb3bc03dfb93ccfeceec76eb01548db170610cd8d4da212f436152f551d0f746.scope: Deactivated successfully.
Nov 24 18:58:10 compute-0 podman[295877]: 2025-11-24 18:58:10.081427765 +0000 UTC m=+0.903591244 container died fb3bc03dfb93ccfeceec76eb01548db170610cd8d4da212f436152f551d0f746 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:58:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-62a836b02d5f19e0007a17438b1fd069a204ff9c1fdd6accd6326280c8e1f7be-merged.mount: Deactivated successfully.
Nov 24 18:58:10 compute-0 podman[295877]: 2025-11-24 18:58:10.146876845 +0000 UTC m=+0.969040314 container remove fb3bc03dfb93ccfeceec76eb01548db170610cd8d4da212f436152f551d0f746 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_williams, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 18:58:10 compute-0 systemd[1]: libpod-conmon-fb3bc03dfb93ccfeceec76eb01548db170610cd8d4da212f436152f551d0f746.scope: Deactivated successfully.
Nov 24 18:58:10 compute-0 sudo[295771]: pam_unix(sudo:session): session closed for user root
Nov 24 18:58:10 compute-0 sudo[295916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:58:10 compute-0 sudo[295916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:58:10 compute-0 sudo[295916]: pam_unix(sudo:session): session closed for user root
Nov 24 18:58:10 compute-0 sudo[295941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:58:10 compute-0 sudo[295941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:58:10 compute-0 sudo[295941]: pam_unix(sudo:session): session closed for user root
Nov 24 18:58:10 compute-0 sudo[295966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:58:10 compute-0 sudo[295966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:58:10 compute-0 sudo[295966]: pam_unix(sudo:session): session closed for user root
Nov 24 18:58:10 compute-0 sudo[295991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:58:10 compute-0 sudo[295991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:58:10 compute-0 podman[296055]: 2025-11-24 18:58:10.7196594 +0000 UTC m=+0.039344633 container create 4581613fec3c8f9c9b7b510331e2f871ddd96bb52748fe696a2d513d6903e0ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hopper, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Nov 24 18:58:10 compute-0 systemd[1]: Started libpod-conmon-4581613fec3c8f9c9b7b510331e2f871ddd96bb52748fe696a2d513d6903e0ca.scope.
Nov 24 18:58:10 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:58:10 compute-0 podman[296055]: 2025-11-24 18:58:10.69799304 +0000 UTC m=+0.017678273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:58:10 compute-0 podman[296055]: 2025-11-24 18:58:10.796596041 +0000 UTC m=+0.116281304 container init 4581613fec3c8f9c9b7b510331e2f871ddd96bb52748fe696a2d513d6903e0ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hopper, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:58:10 compute-0 podman[296055]: 2025-11-24 18:58:10.802657349 +0000 UTC m=+0.122342582 container start 4581613fec3c8f9c9b7b510331e2f871ddd96bb52748fe696a2d513d6903e0ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:58:10 compute-0 podman[296055]: 2025-11-24 18:58:10.805709554 +0000 UTC m=+0.125394847 container attach 4581613fec3c8f9c9b7b510331e2f871ddd96bb52748fe696a2d513d6903e0ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hopper, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:58:10 compute-0 strange_hopper[296071]: 167 167
Nov 24 18:58:10 compute-0 systemd[1]: libpod-4581613fec3c8f9c9b7b510331e2f871ddd96bb52748fe696a2d513d6903e0ca.scope: Deactivated successfully.
Nov 24 18:58:10 compute-0 podman[296055]: 2025-11-24 18:58:10.809409365 +0000 UTC m=+0.129094638 container died 4581613fec3c8f9c9b7b510331e2f871ddd96bb52748fe696a2d513d6903e0ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:58:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ce32cbc2e4064baeb6ce532c9c04b359a1b82583794a41c86409fbe077a468a-merged.mount: Deactivated successfully.
Nov 24 18:58:10 compute-0 podman[296055]: 2025-11-24 18:58:10.847369513 +0000 UTC m=+0.167054746 container remove 4581613fec3c8f9c9b7b510331e2f871ddd96bb52748fe696a2d513d6903e0ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hopper, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:58:10 compute-0 systemd[1]: libpod-conmon-4581613fec3c8f9c9b7b510331e2f871ddd96bb52748fe696a2d513d6903e0ca.scope: Deactivated successfully.
Nov 24 18:58:11 compute-0 podman[296094]: 2025-11-24 18:58:11.006028122 +0000 UTC m=+0.038054401 container create c90fa050e9e7bda91466fe7d4df5924fd861fa86cc1042864eea68a35c9a4777 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_nash, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:58:11 compute-0 systemd[1]: Started libpod-conmon-c90fa050e9e7bda91466fe7d4df5924fd861fa86cc1042864eea68a35c9a4777.scope.
Nov 24 18:58:11 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c348d70639cf0c1b22ef428aecb2533c9f5e3d2adc2f73222a9a94aea11139/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c348d70639cf0c1b22ef428aecb2533c9f5e3d2adc2f73222a9a94aea11139/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c348d70639cf0c1b22ef428aecb2533c9f5e3d2adc2f73222a9a94aea11139/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c348d70639cf0c1b22ef428aecb2533c9f5e3d2adc2f73222a9a94aea11139/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:58:11 compute-0 podman[296094]: 2025-11-24 18:58:11.083414064 +0000 UTC m=+0.115440373 container init c90fa050e9e7bda91466fe7d4df5924fd861fa86cc1042864eea68a35c9a4777 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 18:58:11 compute-0 podman[296094]: 2025-11-24 18:58:10.992183973 +0000 UTC m=+0.024210262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:58:11 compute-0 podman[296094]: 2025-11-24 18:58:11.089217036 +0000 UTC m=+0.121243325 container start c90fa050e9e7bda91466fe7d4df5924fd861fa86cc1042864eea68a35c9a4777 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_nash, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 18:58:11 compute-0 podman[296094]: 2025-11-24 18:58:11.093009269 +0000 UTC m=+0.125035578 container attach c90fa050e9e7bda91466fe7d4df5924fd861fa86cc1042864eea68a35c9a4777 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_nash, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:58:11 compute-0 ceph-mon[74927]: pgmap v1309: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:12 compute-0 modest_nash[296110]: {
Nov 24 18:58:12 compute-0 modest_nash[296110]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:58:12 compute-0 modest_nash[296110]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:58:12 compute-0 modest_nash[296110]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:58:12 compute-0 modest_nash[296110]:         "osd_id": 0,
Nov 24 18:58:12 compute-0 modest_nash[296110]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:58:12 compute-0 modest_nash[296110]:         "type": "bluestore"
Nov 24 18:58:12 compute-0 modest_nash[296110]:     },
Nov 24 18:58:12 compute-0 modest_nash[296110]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:58:12 compute-0 modest_nash[296110]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:58:12 compute-0 modest_nash[296110]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:58:12 compute-0 modest_nash[296110]:         "osd_id": 1,
Nov 24 18:58:12 compute-0 modest_nash[296110]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:58:12 compute-0 modest_nash[296110]:         "type": "bluestore"
Nov 24 18:58:12 compute-0 modest_nash[296110]:     },
Nov 24 18:58:12 compute-0 modest_nash[296110]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:58:12 compute-0 modest_nash[296110]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:58:12 compute-0 modest_nash[296110]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:58:12 compute-0 modest_nash[296110]:         "osd_id": 2,
Nov 24 18:58:12 compute-0 modest_nash[296110]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:58:12 compute-0 modest_nash[296110]:         "type": "bluestore"
Nov 24 18:58:12 compute-0 modest_nash[296110]:     }
Nov 24 18:58:12 compute-0 modest_nash[296110]: }
Nov 24 18:58:12 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1310: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:12 compute-0 systemd[1]: libpod-c90fa050e9e7bda91466fe7d4df5924fd861fa86cc1042864eea68a35c9a4777.scope: Deactivated successfully.
Nov 24 18:58:12 compute-0 podman[296094]: 2025-11-24 18:58:12.074559748 +0000 UTC m=+1.106586037 container died c90fa050e9e7bda91466fe7d4df5924fd861fa86cc1042864eea68a35c9a4777 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_nash, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 18:58:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4c348d70639cf0c1b22ef428aecb2533c9f5e3d2adc2f73222a9a94aea11139-merged.mount: Deactivated successfully.
Nov 24 18:58:12 compute-0 podman[296094]: 2025-11-24 18:58:12.133280884 +0000 UTC m=+1.165307183 container remove c90fa050e9e7bda91466fe7d4df5924fd861fa86cc1042864eea68a35c9a4777 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:58:12 compute-0 systemd[1]: libpod-conmon-c90fa050e9e7bda91466fe7d4df5924fd861fa86cc1042864eea68a35c9a4777.scope: Deactivated successfully.
Nov 24 18:58:12 compute-0 sudo[295991]: pam_unix(sudo:session): session closed for user root
Nov 24 18:58:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:58:12 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:58:12 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:58:12 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:58:12 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev aa72c128-d6e2-40bd-9a2a-12bbdab8a3d7 does not exist
Nov 24 18:58:12 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 4ceddd02-442c-4ed0-bc5d-b9b08ebdcdca does not exist
Nov 24 18:58:12 compute-0 sudo[296155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:58:12 compute-0 sudo[296155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:58:12 compute-0 sudo[296155]: pam_unix(sudo:session): session closed for user root
Nov 24 18:58:12 compute-0 sudo[296180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:58:12 compute-0 sudo[296180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:58:12 compute-0 sudo[296180]: pam_unix(sudo:session): session closed for user root
Nov 24 18:58:13 compute-0 ceph-mon[74927]: pgmap v1310: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:13 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:58:13 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:58:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:58:14 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1311: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:15 compute-0 ceph-mon[74927]: pgmap v1311: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:16 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1312: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:17 compute-0 ceph-mon[74927]: pgmap v1312: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:17 compute-0 nova_compute[270693]: 2025-11-24 18:58:17.523 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:58:18 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1313: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:18 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:58:18 compute-0 nova_compute[270693]: 2025-11-24 18:58:18.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:58:18 compute-0 nova_compute[270693]: 2025-11-24 18:58:18.563 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:58:18 compute-0 nova_compute[270693]: 2025-11-24 18:58:18.563 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:58:18 compute-0 nova_compute[270693]: 2025-11-24 18:58:18.564 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:58:18 compute-0 nova_compute[270693]: 2025-11-24 18:58:18.564 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 18:58:18 compute-0 nova_compute[270693]: 2025-11-24 18:58:18.565 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:58:19 compute-0 podman[296225]: 2025-11-24 18:58:19.000787618 +0000 UTC m=+0.080384437 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 18:58:19 compute-0 podman[296227]: 2025-11-24 18:58:19.025782259 +0000 UTC m=+0.098772926 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:58:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:58:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2804493441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:58:19 compute-0 nova_compute[270693]: 2025-11-24 18:58:19.047 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:58:19 compute-0 podman[296226]: 2025-11-24 18:58:19.067888698 +0000 UTC m=+0.144904704 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 24 18:58:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 18:58:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1061668998' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:58:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 18:58:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1061668998' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:58:19 compute-0 ceph-mon[74927]: pgmap v1313: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2804493441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:58:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/1061668998' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:58:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/1061668998' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:58:19 compute-0 nova_compute[270693]: 2025-11-24 18:58:19.243 270697 WARNING nova.virt.libvirt.driver [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 18:58:19 compute-0 nova_compute[270693]: 2025-11-24 18:58:19.245 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4984MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 18:58:19 compute-0 nova_compute[270693]: 2025-11-24 18:58:19.245 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:58:19 compute-0 nova_compute[270693]: 2025-11-24 18:58:19.245 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:58:19 compute-0 nova_compute[270693]: 2025-11-24 18:58:19.308 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 18:58:19 compute-0 nova_compute[270693]: 2025-11-24 18:58:19.308 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 18:58:19 compute-0 nova_compute[270693]: 2025-11-24 18:58:19.331 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:58:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:58:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3810146975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:58:19 compute-0 nova_compute[270693]: 2025-11-24 18:58:19.811 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:58:19 compute-0 nova_compute[270693]: 2025-11-24 18:58:19.816 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 18:58:19 compute-0 nova_compute[270693]: 2025-11-24 18:58:19.834 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 18:58:19 compute-0 nova_compute[270693]: 2025-11-24 18:58:19.836 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 18:58:19 compute-0 nova_compute[270693]: 2025-11-24 18:58:19.836 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:58:19 compute-0 nova_compute[270693]: 2025-11-24 18:58:19.836 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:58:19 compute-0 nova_compute[270693]: 2025-11-24 18:58:19.837 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 24 18:58:20 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1314: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:20 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3810146975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:58:20 compute-0 nova_compute[270693]: 2025-11-24 18:58:20.856 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:58:20 compute-0 nova_compute[270693]: 2025-11-24 18:58:20.856 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 18:58:20 compute-0 nova_compute[270693]: 2025-11-24 18:58:20.857 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 18:58:20 compute-0 nova_compute[270693]: 2025-11-24 18:58:20.879 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 18:58:21 compute-0 ceph-mon[74927]: pgmap v1314: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:21 compute-0 nova_compute[270693]: 2025-11-24 18:58:21.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:58:22 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1315: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:22 compute-0 nova_compute[270693]: 2025-11-24 18:58:22.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:58:22 compute-0 nova_compute[270693]: 2025-11-24 18:58:22.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:58:22 compute-0 nova_compute[270693]: 2025-11-24 18:58:22.529 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 18:58:22 compute-0 nova_compute[270693]: 2025-11-24 18:58:22.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:58:22 compute-0 nova_compute[270693]: 2025-11-24 18:58:22.530 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 24 18:58:22 compute-0 nova_compute[270693]: 2025-11-24 18:58:22.543 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 24 18:58:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:58:22.756 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:58:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:58:22.756 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:58:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:58:22.757 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:58:23 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:58:23 compute-0 ceph-mon[74927]: pgmap v1315: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:23 compute-0 nova_compute[270693]: 2025-11-24 18:58:23.543 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:58:24 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1316: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:24 compute-0 nova_compute[270693]: 2025-11-24 18:58:24.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:58:25 compute-0 ceph-mon[74927]: pgmap v1316: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:26 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1317: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:26 compute-0 nova_compute[270693]: 2025-11-24 18:58:26.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:58:26 compute-0 nova_compute[270693]: 2025-11-24 18:58:26.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:58:27 compute-0 ceph-mon[74927]: pgmap v1317: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:28 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1318: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:28 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:58:29 compute-0 ceph-mon[74927]: pgmap v1318: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:30 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1319: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:31 compute-0 ceph-mon[74927]: pgmap v1319: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:32 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1320: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:33 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:58:33 compute-0 ceph-mon[74927]: pgmap v1320: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:34 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1321: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:58:34
Nov 24 18:58:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:58:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:58:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'images', 'backups']
Nov 24 18:58:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:58:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:58:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:58:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:58:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:58:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:58:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:58:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:58:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:58:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:58:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:58:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:58:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:58:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:58:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:58:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:58:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:58:35 compute-0 ceph-mon[74927]: pgmap v1321: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:36 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1322: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:37 compute-0 ceph-mon[74927]: pgmap v1322: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:38 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1323: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:38 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:58:39 compute-0 ceph-mon[74927]: pgmap v1323: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:40 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1324: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:41 compute-0 ceph-mon[74927]: pgmap v1324: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:42 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1325: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:58:43 compute-0 ceph-mon[74927]: pgmap v1325: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:58:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:58:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:58:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:58:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 18:58:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:58:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 18:58:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:58:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:58:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:58:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 18:58:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:58:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:58:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:58:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:58:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:58:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:58:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:58:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:58:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:58:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:58:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:58:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:58:44 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1326: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:45 compute-0 ceph-mon[74927]: pgmap v1326: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:46 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1327: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:47 compute-0 ceph-mon[74927]: pgmap v1327: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:47 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:58:47 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6084 writes, 27K keys, 6084 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 6084 writes, 6084 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1448 writes, 6397 keys, 1448 commit groups, 1.0 writes per commit group, ingest: 9.34 MB, 0.02 MB/s
                                           Interval WAL: 1448 writes, 1448 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     95.6      0.32              0.09        15    0.021       0      0       0.0       0.0
                                             L6      1/0    7.09 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4    192.6    156.6      0.65              0.29        14    0.047     64K   7856       0.0       0.0
                                            Sum      1/0    7.09 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4    129.3    136.6      0.97              0.38        29    0.034     64K   7856       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.0    135.6    135.3      0.24              0.11         6    0.040     16K   2076       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    192.6    156.6      0.65              0.29        14    0.047     64K   7856       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     96.0      0.32              0.09        14    0.023       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     28.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.030, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.13 GB write, 0.06 MB/s write, 0.12 GB read, 0.05 MB/s read, 1.0 seconds
                                           Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562af0cfd1f0#2 capacity: 304.00 MB usage: 13.80 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.00013 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1008,13.28 MB,4.3687%) FilterBlock(30,186.30 KB,0.0598456%) IndexBlock(30,345.12 KB,0.110867%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 24 18:58:48 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1328: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:58:49 compute-0 ceph-mon[74927]: pgmap v1328: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:49 compute-0 podman[296309]: 2025-11-24 18:58:49.991893524 +0000 UTC m=+0.089107090 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 24 18:58:50 compute-0 podman[296311]: 2025-11-24 18:58:50.012814955 +0000 UTC m=+0.101971844 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 18:58:50 compute-0 podman[296310]: 2025-11-24 18:58:50.047862412 +0000 UTC m=+0.131679651 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 18:58:50 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1329: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:51 compute-0 ceph-mon[74927]: pgmap v1329: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:52 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1330: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:53 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:58:53 compute-0 ceph-mon[74927]: pgmap v1330: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:54 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1331: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:55 compute-0 ceph-mon[74927]: pgmap v1331: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:56 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1332: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:57 compute-0 ceph-mon[74927]: pgmap v1332: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:58 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1333: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:58:58 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:58:59 compute-0 ceph-mon[74927]: pgmap v1333: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:00 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1334: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:01 compute-0 ceph-mon[74927]: pgmap v1334: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:02 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1335: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:03 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:59:03 compute-0 ceph-mon[74927]: pgmap v1335: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:04 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1336: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:59:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:59:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:59:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:59:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:59:04 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:59:05 compute-0 ceph-mon[74927]: pgmap v1336: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:06 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1337: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:07 compute-0 ceph-mon[74927]: pgmap v1337: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:08 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1338: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:08 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:59:09 compute-0 ceph-mon[74927]: pgmap v1338: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:10 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1339: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:11 compute-0 ceph-mon[74927]: pgmap v1339: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:12 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1340: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:12 compute-0 sudo[296374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:59:12 compute-0 sudo[296374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:59:12 compute-0 sudo[296374]: pam_unix(sudo:session): session closed for user root
Nov 24 18:59:12 compute-0 sudo[296399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:59:12 compute-0 sudo[296399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:59:12 compute-0 sudo[296399]: pam_unix(sudo:session): session closed for user root
Nov 24 18:59:12 compute-0 sudo[296424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:59:12 compute-0 sudo[296424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:59:12 compute-0 sudo[296424]: pam_unix(sudo:session): session closed for user root
Nov 24 18:59:12 compute-0 sudo[296449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 24 18:59:12 compute-0 sudo[296449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:59:13 compute-0 sudo[296449]: pam_unix(sudo:session): session closed for user root
Nov 24 18:59:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:59:13 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:59:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 18:59:13 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:59:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 18:59:13 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:59:13 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 36b1f83f-e10d-4975-b856-bfc81e46ef94 does not exist
Nov 24 18:59:13 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev aa2e5981-c588-4ffb-baf0-87721f6129cc does not exist
Nov 24 18:59:13 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 1d4e5dd6-e2f8-4a8d-8483-c8a71f82499c does not exist
Nov 24 18:59:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 18:59:13 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:59:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 18:59:13 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:59:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:59:13 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:59:13 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:59:13 compute-0 sudo[296506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:59:13 compute-0 sudo[296506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:59:13 compute-0 sudo[296506]: pam_unix(sudo:session): session closed for user root
Nov 24 18:59:13 compute-0 sudo[296531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:59:13 compute-0 sudo[296531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:59:13 compute-0 sudo[296531]: pam_unix(sudo:session): session closed for user root
Nov 24 18:59:13 compute-0 sudo[296556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:59:13 compute-0 sudo[296556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:59:13 compute-0 sudo[296556]: pam_unix(sudo:session): session closed for user root
Nov 24 18:59:13 compute-0 sudo[296581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 24 18:59:13 compute-0 sudo[296581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:59:13 compute-0 ceph-mon[74927]: pgmap v1340: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:13 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:59:13 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 18:59:13 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:59:13 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 18:59:13 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 18:59:13 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:59:13 compute-0 podman[296649]: 2025-11-24 18:59:13.789081223 +0000 UTC m=+0.045872692 container create af60b8fb2825d19fcffdb0136d197073affcef9aae476f7550010bde57ef91b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wu, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:59:13 compute-0 systemd[1]: Started libpod-conmon-af60b8fb2825d19fcffdb0136d197073affcef9aae476f7550010bde57ef91b2.scope.
Nov 24 18:59:13 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:59:13 compute-0 podman[296649]: 2025-11-24 18:59:13.770573911 +0000 UTC m=+0.027365370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:59:13 compute-0 podman[296649]: 2025-11-24 18:59:13.876606103 +0000 UTC m=+0.133397572 container init af60b8fb2825d19fcffdb0136d197073affcef9aae476f7550010bde57ef91b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 18:59:13 compute-0 podman[296649]: 2025-11-24 18:59:13.887010547 +0000 UTC m=+0.143802036 container start af60b8fb2825d19fcffdb0136d197073affcef9aae476f7550010bde57ef91b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wu, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:59:13 compute-0 goofy_wu[296665]: 167 167
Nov 24 18:59:13 compute-0 systemd[1]: libpod-af60b8fb2825d19fcffdb0136d197073affcef9aae476f7550010bde57ef91b2.scope: Deactivated successfully.
Nov 24 18:59:13 compute-0 podman[296649]: 2025-11-24 18:59:13.891803455 +0000 UTC m=+0.148594944 container attach af60b8fb2825d19fcffdb0136d197073affcef9aae476f7550010bde57ef91b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 24 18:59:13 compute-0 podman[296649]: 2025-11-24 18:59:13.892736407 +0000 UTC m=+0.149527846 container died af60b8fb2825d19fcffdb0136d197073affcef9aae476f7550010bde57ef91b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wu, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:59:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-963f4cf3ded6cbfa2e46865697cb195d87b9a7b29eda3a4e9ae3112f3c572c77-merged.mount: Deactivated successfully.
Nov 24 18:59:13 compute-0 podman[296649]: 2025-11-24 18:59:13.934786746 +0000 UTC m=+0.191578175 container remove af60b8fb2825d19fcffdb0136d197073affcef9aae476f7550010bde57ef91b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:59:13 compute-0 systemd[1]: libpod-conmon-af60b8fb2825d19fcffdb0136d197073affcef9aae476f7550010bde57ef91b2.scope: Deactivated successfully.
Nov 24 18:59:14 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1341: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:14 compute-0 podman[296687]: 2025-11-24 18:59:14.117237407 +0000 UTC m=+0.048774434 container create 90af680f629c4b331c9c1ed9f733fb57146f74c54ed52a77a9f972a167ff8616 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 18:59:14 compute-0 systemd[1]: Started libpod-conmon-90af680f629c4b331c9c1ed9f733fb57146f74c54ed52a77a9f972a167ff8616.scope.
Nov 24 18:59:14 compute-0 podman[296687]: 2025-11-24 18:59:14.097125425 +0000 UTC m=+0.028662542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:59:14 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fdb0d51f2de07d09876b2c569e0e518479eae1d532f47f41c37ba5a33444f32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fdb0d51f2de07d09876b2c569e0e518479eae1d532f47f41c37ba5a33444f32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fdb0d51f2de07d09876b2c569e0e518479eae1d532f47f41c37ba5a33444f32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fdb0d51f2de07d09876b2c569e0e518479eae1d532f47f41c37ba5a33444f32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:59:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fdb0d51f2de07d09876b2c569e0e518479eae1d532f47f41c37ba5a33444f32/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 18:59:14 compute-0 podman[296687]: 2025-11-24 18:59:14.225037822 +0000 UTC m=+0.156574939 container init 90af680f629c4b331c9c1ed9f733fb57146f74c54ed52a77a9f972a167ff8616 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 18:59:14 compute-0 podman[296687]: 2025-11-24 18:59:14.233754776 +0000 UTC m=+0.165291813 container start 90af680f629c4b331c9c1ed9f733fb57146f74c54ed52a77a9f972a167ff8616 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dijkstra, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Nov 24 18:59:14 compute-0 podman[296687]: 2025-11-24 18:59:14.237496967 +0000 UTC m=+0.169034084 container attach 90af680f629c4b331c9c1ed9f733fb57146f74c54ed52a77a9f972a167ff8616 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 18:59:15 compute-0 quizzical_dijkstra[296704]: --> passed data devices: 0 physical, 3 LVM
Nov 24 18:59:15 compute-0 quizzical_dijkstra[296704]: --> relative data size: 1.0
Nov 24 18:59:15 compute-0 quizzical_dijkstra[296704]: --> All data devices are unavailable
Nov 24 18:59:15 compute-0 systemd[1]: libpod-90af680f629c4b331c9c1ed9f733fb57146f74c54ed52a77a9f972a167ff8616.scope: Deactivated successfully.
Nov 24 18:59:15 compute-0 systemd[1]: libpod-90af680f629c4b331c9c1ed9f733fb57146f74c54ed52a77a9f972a167ff8616.scope: Consumed 1.062s CPU time.
Nov 24 18:59:15 compute-0 podman[296733]: 2025-11-24 18:59:15.40773391 +0000 UTC m=+0.036857112 container died 90af680f629c4b331c9c1ed9f733fb57146f74c54ed52a77a9f972a167ff8616 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dijkstra, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:59:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fdb0d51f2de07d09876b2c569e0e518479eae1d532f47f41c37ba5a33444f32-merged.mount: Deactivated successfully.
Nov 24 18:59:15 compute-0 podman[296733]: 2025-11-24 18:59:15.481517724 +0000 UTC m=+0.110640896 container remove 90af680f629c4b331c9c1ed9f733fb57146f74c54ed52a77a9f972a167ff8616 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dijkstra, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:59:15 compute-0 systemd[1]: libpod-conmon-90af680f629c4b331c9c1ed9f733fb57146f74c54ed52a77a9f972a167ff8616.scope: Deactivated successfully.
Nov 24 18:59:15 compute-0 sudo[296581]: pam_unix(sudo:session): session closed for user root
Nov 24 18:59:15 compute-0 sudo[296748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:59:15 compute-0 sudo[296748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:59:15 compute-0 ceph-mon[74927]: pgmap v1341: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:15 compute-0 sudo[296748]: pam_unix(sudo:session): session closed for user root
Nov 24 18:59:15 compute-0 sudo[296773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:59:15 compute-0 sudo[296773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:59:15 compute-0 sudo[296773]: pam_unix(sudo:session): session closed for user root
Nov 24 18:59:15 compute-0 sudo[296798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:59:15 compute-0 sudo[296798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:59:15 compute-0 sudo[296798]: pam_unix(sudo:session): session closed for user root
Nov 24 18:59:15 compute-0 sudo[296823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- lvm list --format json
Nov 24 18:59:15 compute-0 sudo[296823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:59:16 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1342: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:16 compute-0 podman[296889]: 2025-11-24 18:59:16.207063744 +0000 UTC m=+0.061855043 container create 1a3e5b7fcdb37f7b48110a8e844299d102693564e3c37f2d9bc664e1c070de66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jang, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 18:59:16 compute-0 systemd[1]: Started libpod-conmon-1a3e5b7fcdb37f7b48110a8e844299d102693564e3c37f2d9bc664e1c070de66.scope.
Nov 24 18:59:16 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:59:16 compute-0 podman[296889]: 2025-11-24 18:59:16.181762716 +0000 UTC m=+0.036554065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:59:16 compute-0 podman[296889]: 2025-11-24 18:59:16.288896335 +0000 UTC m=+0.143803277 container init 1a3e5b7fcdb37f7b48110a8e844299d102693564e3c37f2d9bc664e1c070de66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jang, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 18:59:16 compute-0 podman[296889]: 2025-11-24 18:59:16.301807701 +0000 UTC m=+0.156598980 container start 1a3e5b7fcdb37f7b48110a8e844299d102693564e3c37f2d9bc664e1c070de66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jang, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:59:16 compute-0 podman[296889]: 2025-11-24 18:59:16.306184128 +0000 UTC m=+0.160975477 container attach 1a3e5b7fcdb37f7b48110a8e844299d102693564e3c37f2d9bc664e1c070de66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 18:59:16 compute-0 suspicious_jang[296905]: 167 167
Nov 24 18:59:16 compute-0 systemd[1]: libpod-1a3e5b7fcdb37f7b48110a8e844299d102693564e3c37f2d9bc664e1c070de66.scope: Deactivated successfully.
Nov 24 18:59:16 compute-0 podman[296889]: 2025-11-24 18:59:16.307413828 +0000 UTC m=+0.162205097 container died 1a3e5b7fcdb37f7b48110a8e844299d102693564e3c37f2d9bc664e1c070de66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 18:59:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-b456baebdc80d427952cb544439d875d3a8453e8032b4868227f93cabce9d067-merged.mount: Deactivated successfully.
Nov 24 18:59:16 compute-0 podman[296889]: 2025-11-24 18:59:16.357078922 +0000 UTC m=+0.211870221 container remove 1a3e5b7fcdb37f7b48110a8e844299d102693564e3c37f2d9bc664e1c070de66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 18:59:16 compute-0 systemd[1]: libpod-conmon-1a3e5b7fcdb37f7b48110a8e844299d102693564e3c37f2d9bc664e1c070de66.scope: Deactivated successfully.
Nov 24 18:59:16 compute-0 podman[296930]: 2025-11-24 18:59:16.597006069 +0000 UTC m=+0.035715625 container create 51cfc1b5cad5a28054cfabacc264931cf903990ad81c9e6ee6655074d34d19e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:59:16 compute-0 systemd[1]: Started libpod-conmon-51cfc1b5cad5a28054cfabacc264931cf903990ad81c9e6ee6655074d34d19e2.scope.
Nov 24 18:59:16 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:59:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e5dd399732e80633bd42a95da43c27a367b1fc983d35aa0e4945bf535407a35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:59:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e5dd399732e80633bd42a95da43c27a367b1fc983d35aa0e4945bf535407a35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:59:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e5dd399732e80633bd42a95da43c27a367b1fc983d35aa0e4945bf535407a35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:59:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e5dd399732e80633bd42a95da43c27a367b1fc983d35aa0e4945bf535407a35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:59:16 compute-0 podman[296930]: 2025-11-24 18:59:16.57946832 +0000 UTC m=+0.018177886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:59:16 compute-0 podman[296930]: 2025-11-24 18:59:16.678426409 +0000 UTC m=+0.117135995 container init 51cfc1b5cad5a28054cfabacc264931cf903990ad81c9e6ee6655074d34d19e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 18:59:16 compute-0 podman[296930]: 2025-11-24 18:59:16.6870255 +0000 UTC m=+0.125735036 container start 51cfc1b5cad5a28054cfabacc264931cf903990ad81c9e6ee6655074d34d19e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cannon, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 18:59:16 compute-0 podman[296930]: 2025-11-24 18:59:16.69029646 +0000 UTC m=+0.129006046 container attach 51cfc1b5cad5a28054cfabacc264931cf903990ad81c9e6ee6655074d34d19e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:59:17 compute-0 confident_cannon[296947]: {
Nov 24 18:59:17 compute-0 confident_cannon[296947]:     "0": [
Nov 24 18:59:17 compute-0 confident_cannon[296947]:         {
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "devices": [
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "/dev/loop3"
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             ],
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "lv_name": "ceph_lv0",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "lv_size": "21470642176",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "name": "ceph_lv0",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "tags": {
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.cluster_name": "ceph",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.crush_device_class": "",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.encrypted": "0",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.osd_id": "0",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.type": "block",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.vdo": "0"
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             },
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "type": "block",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "vg_name": "ceph_vg0"
Nov 24 18:59:17 compute-0 confident_cannon[296947]:         }
Nov 24 18:59:17 compute-0 confident_cannon[296947]:     ],
Nov 24 18:59:17 compute-0 confident_cannon[296947]:     "1": [
Nov 24 18:59:17 compute-0 confident_cannon[296947]:         {
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "devices": [
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "/dev/loop4"
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             ],
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "lv_name": "ceph_lv1",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "lv_size": "21470642176",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "name": "ceph_lv1",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "tags": {
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.cluster_name": "ceph",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.crush_device_class": "",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.encrypted": "0",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.osd_id": "1",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.type": "block",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.vdo": "0"
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             },
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "type": "block",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "vg_name": "ceph_vg1"
Nov 24 18:59:17 compute-0 confident_cannon[296947]:         }
Nov 24 18:59:17 compute-0 confident_cannon[296947]:     ],
Nov 24 18:59:17 compute-0 confident_cannon[296947]:     "2": [
Nov 24 18:59:17 compute-0 confident_cannon[296947]:         {
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "devices": [
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "/dev/loop5"
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             ],
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "lv_name": "ceph_lv2",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "lv_size": "21470642176",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "name": "ceph_lv2",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "tags": {
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.cluster_name": "ceph",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.crush_device_class": "",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.encrypted": "0",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.osd_id": "2",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.type": "block",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:                 "ceph.vdo": "0"
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             },
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "type": "block",
Nov 24 18:59:17 compute-0 confident_cannon[296947]:             "vg_name": "ceph_vg2"
Nov 24 18:59:17 compute-0 confident_cannon[296947]:         }
Nov 24 18:59:17 compute-0 confident_cannon[296947]:     ]
Nov 24 18:59:17 compute-0 confident_cannon[296947]: }
Nov 24 18:59:17 compute-0 systemd[1]: libpod-51cfc1b5cad5a28054cfabacc264931cf903990ad81c9e6ee6655074d34d19e2.scope: Deactivated successfully.
Nov 24 18:59:17 compute-0 podman[296930]: 2025-11-24 18:59:17.409414702 +0000 UTC m=+0.848124268 container died 51cfc1b5cad5a28054cfabacc264931cf903990ad81c9e6ee6655074d34d19e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cannon, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 18:59:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e5dd399732e80633bd42a95da43c27a367b1fc983d35aa0e4945bf535407a35-merged.mount: Deactivated successfully.
Nov 24 18:59:17 compute-0 podman[296930]: 2025-11-24 18:59:17.46834203 +0000 UTC m=+0.907051576 container remove 51cfc1b5cad5a28054cfabacc264931cf903990ad81c9e6ee6655074d34d19e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 18:59:17 compute-0 systemd[1]: libpod-conmon-51cfc1b5cad5a28054cfabacc264931cf903990ad81c9e6ee6655074d34d19e2.scope: Deactivated successfully.
Nov 24 18:59:17 compute-0 sudo[296823]: pam_unix(sudo:session): session closed for user root
Nov 24 18:59:17 compute-0 sudo[296967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:59:17 compute-0 sudo[296967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:59:17 compute-0 sudo[296967]: pam_unix(sudo:session): session closed for user root
Nov 24 18:59:17 compute-0 sudo[296992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 18:59:17 compute-0 ceph-mon[74927]: pgmap v1342: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:17 compute-0 sudo[296992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:59:17 compute-0 sudo[296992]: pam_unix(sudo:session): session closed for user root
Nov 24 18:59:17 compute-0 sudo[297017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:59:17 compute-0 sudo[297017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:59:17 compute-0 sudo[297017]: pam_unix(sudo:session): session closed for user root
Nov 24 18:59:17 compute-0 sudo[297042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -- raw list --format json
Nov 24 18:59:17 compute-0 sudo[297042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:59:18 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1343: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:18 compute-0 podman[297108]: 2025-11-24 18:59:18.127662277 +0000 UTC m=+0.041009273 container create 2c93574e3ad3048827097ecc141da926436074fe19660a8c76debe5347b4795c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wright, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 18:59:18 compute-0 systemd[1]: Started libpod-conmon-2c93574e3ad3048827097ecc141da926436074fe19660a8c76debe5347b4795c.scope.
Nov 24 18:59:18 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:59:18 compute-0 podman[297108]: 2025-11-24 18:59:18.108237178 +0000 UTC m=+0.021584194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:59:18 compute-0 podman[297108]: 2025-11-24 18:59:18.222312882 +0000 UTC m=+0.135659958 container init 2c93574e3ad3048827097ecc141da926436074fe19660a8c76debe5347b4795c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wright, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 18:59:18 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:59:18 compute-0 podman[297108]: 2025-11-24 18:59:18.236022721 +0000 UTC m=+0.149369697 container start 2c93574e3ad3048827097ecc141da926436074fe19660a8c76debe5347b4795c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 18:59:18 compute-0 podman[297108]: 2025-11-24 18:59:18.239435335 +0000 UTC m=+0.152782351 container attach 2c93574e3ad3048827097ecc141da926436074fe19660a8c76debe5347b4795c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 18:59:18 compute-0 vigorous_wright[297124]: 167 167
Nov 24 18:59:18 compute-0 podman[297108]: 2025-11-24 18:59:18.243054984 +0000 UTC m=+0.156402000 container died 2c93574e3ad3048827097ecc141da926436074fe19660a8c76debe5347b4795c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wright, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 18:59:18 compute-0 systemd[1]: libpod-2c93574e3ad3048827097ecc141da926436074fe19660a8c76debe5347b4795c.scope: Deactivated successfully.
Nov 24 18:59:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-973632bcda735c76b2d9b352e9c7f2f559efc05c1ad8b64fc3e9e440e51b7f1a-merged.mount: Deactivated successfully.
Nov 24 18:59:18 compute-0 podman[297108]: 2025-11-24 18:59:18.288250009 +0000 UTC m=+0.201596985 container remove 2c93574e3ad3048827097ecc141da926436074fe19660a8c76debe5347b4795c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 18:59:18 compute-0 systemd[1]: libpod-conmon-2c93574e3ad3048827097ecc141da926436074fe19660a8c76debe5347b4795c.scope: Deactivated successfully.
Nov 24 18:59:18 compute-0 podman[297149]: 2025-11-24 18:59:18.463614376 +0000 UTC m=+0.039349332 container create 2037551e3baa025b0c6a4f6b15037cfb534da03b0ba0c96a0a00b70cfdd41b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bardeen, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 24 18:59:18 compute-0 systemd[1]: Started libpod-conmon-2037551e3baa025b0c6a4f6b15037cfb534da03b0ba0c96a0a00b70cfdd41b53.scope.
Nov 24 18:59:18 compute-0 systemd[1]: Started libcrun container.
Nov 24 18:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e6cf385473b39befbea846533302760ef1ca895fc832108c41da4a460ecfdeb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 18:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e6cf385473b39befbea846533302760ef1ca895fc832108c41da4a460ecfdeb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 18:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e6cf385473b39befbea846533302760ef1ca895fc832108c41da4a460ecfdeb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 18:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e6cf385473b39befbea846533302760ef1ca895fc832108c41da4a460ecfdeb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 18:59:18 compute-0 podman[297149]: 2025-11-24 18:59:18.536022283 +0000 UTC m=+0.111757259 container init 2037551e3baa025b0c6a4f6b15037cfb534da03b0ba0c96a0a00b70cfdd41b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 18:59:18 compute-0 podman[297149]: 2025-11-24 18:59:18.445834367 +0000 UTC m=+0.021569353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 18:59:18 compute-0 podman[297149]: 2025-11-24 18:59:18.543829135 +0000 UTC m=+0.119564091 container start 2037551e3baa025b0c6a4f6b15037cfb534da03b0ba0c96a0a00b70cfdd41b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bardeen, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 18:59:18 compute-0 podman[297149]: 2025-11-24 18:59:18.54687531 +0000 UTC m=+0.122610276 container attach 2037551e3baa025b0c6a4f6b15037cfb534da03b0ba0c96a0a00b70cfdd41b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bardeen, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 18:59:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 18:59:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/13639317' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:59:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 18:59:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/13639317' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:59:19 compute-0 trusting_bardeen[297165]: {
Nov 24 18:59:19 compute-0 trusting_bardeen[297165]:     "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 18:59:19 compute-0 trusting_bardeen[297165]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:59:19 compute-0 trusting_bardeen[297165]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 18:59:19 compute-0 trusting_bardeen[297165]:         "osd_id": 0,
Nov 24 18:59:19 compute-0 trusting_bardeen[297165]:         "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 18:59:19 compute-0 trusting_bardeen[297165]:         "type": "bluestore"
Nov 24 18:59:19 compute-0 trusting_bardeen[297165]:     },
Nov 24 18:59:19 compute-0 trusting_bardeen[297165]:     "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 18:59:19 compute-0 trusting_bardeen[297165]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:59:19 compute-0 trusting_bardeen[297165]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 18:59:19 compute-0 trusting_bardeen[297165]:         "osd_id": 1,
Nov 24 18:59:19 compute-0 trusting_bardeen[297165]:         "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 18:59:19 compute-0 trusting_bardeen[297165]:         "type": "bluestore"
Nov 24 18:59:19 compute-0 trusting_bardeen[297165]:     },
Nov 24 18:59:19 compute-0 trusting_bardeen[297165]:     "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 18:59:19 compute-0 trusting_bardeen[297165]:         "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 18:59:19 compute-0 trusting_bardeen[297165]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 18:59:19 compute-0 trusting_bardeen[297165]:         "osd_id": 2,
Nov 24 18:59:19 compute-0 trusting_bardeen[297165]:         "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 18:59:19 compute-0 trusting_bardeen[297165]:         "type": "bluestore"
Nov 24 18:59:19 compute-0 trusting_bardeen[297165]:     }
Nov 24 18:59:19 compute-0 trusting_bardeen[297165]: }
Nov 24 18:59:19 compute-0 nova_compute[270693]: 2025-11-24 18:59:19.540 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:59:19 compute-0 nova_compute[270693]: 2025-11-24 18:59:19.542 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:59:19 compute-0 nova_compute[270693]: 2025-11-24 18:59:19.569 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:59:19 compute-0 nova_compute[270693]: 2025-11-24 18:59:19.570 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:59:19 compute-0 nova_compute[270693]: 2025-11-24 18:59:19.570 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:59:19 compute-0 nova_compute[270693]: 2025-11-24 18:59:19.570 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 18:59:19 compute-0 nova_compute[270693]: 2025-11-24 18:59:19.571 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:59:19 compute-0 systemd[1]: libpod-2037551e3baa025b0c6a4f6b15037cfb534da03b0ba0c96a0a00b70cfdd41b53.scope: Deactivated successfully.
Nov 24 18:59:19 compute-0 systemd[1]: libpod-2037551e3baa025b0c6a4f6b15037cfb534da03b0ba0c96a0a00b70cfdd41b53.scope: Consumed 1.033s CPU time.
Nov 24 18:59:19 compute-0 podman[297149]: 2025-11-24 18:59:19.572081105 +0000 UTC m=+1.147816061 container died 2037551e3baa025b0c6a4f6b15037cfb534da03b0ba0c96a0a00b70cfdd41b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 18:59:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e6cf385473b39befbea846533302760ef1ca895fc832108c41da4a460ecfdeb-merged.mount: Deactivated successfully.
Nov 24 18:59:19 compute-0 ceph-mon[74927]: pgmap v1343: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/13639317' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 18:59:19 compute-0 ceph-mon[74927]: from='client.? 192.168.122.10:0/13639317' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 18:59:19 compute-0 podman[297149]: 2025-11-24 18:59:19.641697363 +0000 UTC m=+1.217432329 container remove 2037551e3baa025b0c6a4f6b15037cfb534da03b0ba0c96a0a00b70cfdd41b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bardeen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 18:59:19 compute-0 systemd[1]: libpod-conmon-2037551e3baa025b0c6a4f6b15037cfb534da03b0ba0c96a0a00b70cfdd41b53.scope: Deactivated successfully.
Nov 24 18:59:19 compute-0 sudo[297042]: pam_unix(sudo:session): session closed for user root
Nov 24 18:59:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 18:59:19 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:59:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 18:59:19 compute-0 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:59:19 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 4cacfeab-dfbd-4d35-9054-a98f6c3015e1 does not exist
Nov 24 18:59:19 compute-0 ceph-mgr[75218]: [progress WARNING root] complete: ev 81e19dc3-132f-44ad-9049-8bb7e57b6e4c does not exist
Nov 24 18:59:19 compute-0 sudo[297232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 18:59:19 compute-0 sudo[297232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:59:19 compute-0 sudo[297232]: pam_unix(sudo:session): session closed for user root
Nov 24 18:59:19 compute-0 sudo[297257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 18:59:19 compute-0 sudo[297257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 18:59:19 compute-0 sudo[297257]: pam_unix(sudo:session): session closed for user root
Nov 24 18:59:19 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:59:19 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2596783831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:59:19 compute-0 nova_compute[270693]: 2025-11-24 18:59:19.994 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:59:20 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1344: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:20 compute-0 nova_compute[270693]: 2025-11-24 18:59:20.123 270697 WARNING nova.virt.libvirt.driver [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 18:59:20 compute-0 nova_compute[270693]: 2025-11-24 18:59:20.124 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4972MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 18:59:20 compute-0 nova_compute[270693]: 2025-11-24 18:59:20.124 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:59:20 compute-0 nova_compute[270693]: 2025-11-24 18:59:20.125 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:59:20 compute-0 nova_compute[270693]: 2025-11-24 18:59:20.291 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 18:59:20 compute-0 nova_compute[270693]: 2025-11-24 18:59:20.291 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 18:59:20 compute-0 nova_compute[270693]: 2025-11-24 18:59:20.386 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Refreshing inventories for resource provider d1cce7ec-de83-4810-91f8-1852891da8a6 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 24 18:59:20 compute-0 nova_compute[270693]: 2025-11-24 18:59:20.466 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Updating ProviderTree inventory for provider d1cce7ec-de83-4810-91f8-1852891da8a6 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 24 18:59:20 compute-0 nova_compute[270693]: 2025-11-24 18:59:20.467 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Updating inventory in ProviderTree for provider d1cce7ec-de83-4810-91f8-1852891da8a6 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 18:59:20 compute-0 nova_compute[270693]: 2025-11-24 18:59:20.484 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Refreshing aggregate associations for resource provider d1cce7ec-de83-4810-91f8-1852891da8a6, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 24 18:59:20 compute-0 nova_compute[270693]: 2025-11-24 18:59:20.509 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Refreshing trait associations for resource provider d1cce7ec-de83-4810-91f8-1852891da8a6, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_FMA3,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE4A,HW_CPU_X86_F16C,HW_CPU_X86_SSE2,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_E1000 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 24 18:59:20 compute-0 nova_compute[270693]: 2025-11-24 18:59:20.529 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 18:59:20 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:59:20 compute-0 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 18:59:20 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2596783831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:59:20 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 18:59:20 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/600938882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:59:20 compute-0 nova_compute[270693]: 2025-11-24 18:59:20.960 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 18:59:20 compute-0 nova_compute[270693]: 2025-11-24 18:59:20.967 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 18:59:20 compute-0 podman[297304]: 2025-11-24 18:59:20.991143016 +0000 UTC m=+0.078873296 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 18:59:20 compute-0 podman[297306]: 2025-11-24 18:59:20.992122751 +0000 UTC m=+0.068735096 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 24 18:59:21 compute-0 podman[297305]: 2025-11-24 18:59:21.022731126 +0000 UTC m=+0.107658736 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 24 18:59:21 compute-0 nova_compute[270693]: 2025-11-24 18:59:21.025 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 18:59:21 compute-0 nova_compute[270693]: 2025-11-24 18:59:21.026 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 18:59:21 compute-0 nova_compute[270693]: 2025-11-24 18:59:21.026 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:59:21 compute-0 ceph-mon[74927]: pgmap v1344: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:21 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/600938882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 18:59:22 compute-0 nova_compute[270693]: 2025-11-24 18:59:22.014 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:59:22 compute-0 nova_compute[270693]: 2025-11-24 18:59:22.032 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:59:22 compute-0 nova_compute[270693]: 2025-11-24 18:59:22.032 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 18:59:22 compute-0 nova_compute[270693]: 2025-11-24 18:59:22.033 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 18:59:22 compute-0 nova_compute[270693]: 2025-11-24 18:59:22.045 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 18:59:22 compute-0 nova_compute[270693]: 2025-11-24 18:59:22.046 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:59:22 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1345: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:22 compute-0 nova_compute[270693]: 2025-11-24 18:59:22.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:59:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:59:22.757 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 18:59:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:59:22.758 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 18:59:22 compute-0 ovn_metadata_agent[179758]: 2025-11-24 18:59:22.758 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 18:59:23 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:59:23 compute-0 ceph-mon[74927]: pgmap v1345: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:24 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1346: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:24 compute-0 nova_compute[270693]: 2025-11-24 18:59:24.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:59:24 compute-0 nova_compute[270693]: 2025-11-24 18:59:24.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:59:24 compute-0 nova_compute[270693]: 2025-11-24 18:59:24.529 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 18:59:25 compute-0 ceph-mon[74927]: pgmap v1346: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:26 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1347: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:26 compute-0 nova_compute[270693]: 2025-11-24 18:59:26.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:59:26 compute-0 nova_compute[270693]: 2025-11-24 18:59:26.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.759106) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010766759143, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2057, "num_deletes": 251, "total_data_size": 3522424, "memory_usage": 3586336, "flush_reason": "Manual Compaction"}
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010766802393, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3456683, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25890, "largest_seqno": 27946, "table_properties": {"data_size": 3447104, "index_size": 6137, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18750, "raw_average_key_size": 20, "raw_value_size": 3428279, "raw_average_value_size": 3678, "num_data_blocks": 272, "num_entries": 932, "num_filter_entries": 932, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764010532, "oldest_key_time": 1764010532, "file_creation_time": 1764010766, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 43329 microseconds, and 9272 cpu microseconds.
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.802434) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3456683 bytes OK
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.802451) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.808930) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.808947) EVENT_LOG_v1 {"time_micros": 1764010766808942, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.808964) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3513798, prev total WAL file size 3513798, number of live WAL files 2.
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.809980) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3375KB)], [59(7255KB)]
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010766810015, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10885885, "oldest_snapshot_seqno": -1}
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5247 keys, 9092319 bytes, temperature: kUnknown
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010766853501, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 9092319, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9055541, "index_size": 22588, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13125, "raw_key_size": 130647, "raw_average_key_size": 24, "raw_value_size": 8958964, "raw_average_value_size": 1707, "num_data_blocks": 931, "num_entries": 5247, "num_filter_entries": 5247, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008324, "oldest_key_time": 0, "file_creation_time": 1764010766, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.853758) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 9092319 bytes
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.855096) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 249.9 rd, 208.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.1 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(5.8) write-amplify(2.6) OK, records in: 5761, records dropped: 514 output_compression: NoCompression
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.855116) EVENT_LOG_v1 {"time_micros": 1764010766855107, "job": 32, "event": "compaction_finished", "compaction_time_micros": 43553, "compaction_time_cpu_micros": 22702, "output_level": 6, "num_output_files": 1, "total_output_size": 9092319, "num_input_records": 5761, "num_output_records": 5247, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010766856003, "job": 32, "event": "table_file_deletion", "file_number": 61}
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010766857740, "job": 32, "event": "table_file_deletion", "file_number": 59}
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.809854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.857792) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.857797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.857799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.857801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:59:26 compute-0 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.857802) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 18:59:27 compute-0 ceph-mon[74927]: pgmap v1347: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:28 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1348: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:28 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:59:29 compute-0 sshd-session[297366]: Accepted publickey for zuul from 192.168.122.10 port 43282 ssh2: ECDSA SHA256:jrbRhTO0vQ5011EeK1ZbrK2vK+fcQyIDL9kUBqDHBYY
Nov 24 18:59:29 compute-0 systemd-logind[822]: New session 57 of user zuul.
Nov 24 18:59:29 compute-0 systemd[1]: Started Session 57 of User zuul.
Nov 24 18:59:29 compute-0 sshd-session[297366]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 18:59:29 compute-0 sudo[297370]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 24 18:59:29 compute-0 sudo[297370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 18:59:29 compute-0 ceph-mon[74927]: pgmap v1348: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:30 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1349: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:31 compute-0 ceph-mon[74927]: pgmap v1349: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:32 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15049 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:32 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1350: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:32 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15051 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:33 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 24 18:59:33 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2279853532' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 18:59:33 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:59:33 compute-0 ceph-mon[74927]: from='client.15049 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:33 compute-0 ceph-mon[74927]: pgmap v1350: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:33 compute-0 ceph-mon[74927]: from='client.15051 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:33 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2279853532' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 18:59:34 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1351: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:59:34
Nov 24 18:59:34 compute-0 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 18:59:34 compute-0 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 18:59:34 compute-0 ceph-mgr[75218]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'vms', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'volumes', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control']
Nov 24 18:59:34 compute-0 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 18:59:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:59:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:59:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:59:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:59:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 18:59:34 compute-0 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 18:59:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 18:59:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:59:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 18:59:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 18:59:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:59:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:59:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 18:59:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 18:59:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:59:34 compute-0 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 18:59:35 compute-0 ovs-vsctl[297654]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 24 18:59:35 compute-0 ceph-mon[74927]: pgmap v1351: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:36 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1352: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:36 compute-0 virtqemud[270425]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 24 18:59:36 compute-0 virtqemud[270425]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 24 18:59:36 compute-0 virtqemud[270425]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 24 18:59:37 compute-0 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: cache status {prefix=cache status} (starting...)
Nov 24 18:59:37 compute-0 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: client ls {prefix=client ls} (starting...)
Nov 24 18:59:37 compute-0 lvm[298019]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 24 18:59:37 compute-0 lvm[298019]: VG ceph_vg2 finished
Nov 24 18:59:37 compute-0 lvm[298022]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 18:59:37 compute-0 lvm[298022]: VG ceph_vg0 finished
Nov 24 18:59:37 compute-0 lvm[298027]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 24 18:59:37 compute-0 lvm[298027]: VG ceph_vg1 finished
Nov 24 18:59:37 compute-0 ceph-mon[74927]: pgmap v1352: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:37 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15055 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:38 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1353: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:38 compute-0 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: damage ls {prefix=damage ls} (starting...)
Nov 24 18:59:38 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15057 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:38 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:59:38 compute-0 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: dump loads {prefix=dump loads} (starting...)
Nov 24 18:59:38 compute-0 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 24 18:59:38 compute-0 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 24 18:59:38 compute-0 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 24 18:59:38 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 24 18:59:38 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2224797317' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 18:59:38 compute-0 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 24 18:59:38 compute-0 ceph-mon[74927]: from='client.15055 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:38 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2224797317' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 18:59:38 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15063 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:38 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:59:38.934+0000 7f6377bb5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 24 18:59:38 compute-0 ceph-mgr[75218]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 24 18:59:38 compute-0 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 24 18:59:38 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 18:59:38 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1715020211' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:59:39 compute-0 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 24 18:59:39 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 24 18:59:39 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3820292158' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 24 18:59:39 compute-0 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: ops {prefix=ops} (starting...)
Nov 24 18:59:39 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 24 18:59:39 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2861406773' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 24 18:59:39 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 24 18:59:39 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1229367238' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 24 18:59:39 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 24 18:59:39 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/278316039' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 18:59:39 compute-0 ceph-mon[74927]: pgmap v1353: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:39 compute-0 ceph-mon[74927]: from='client.15057 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:39 compute-0 ceph-mon[74927]: from='client.15063 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:39 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1715020211' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 18:59:39 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3820292158' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 24 18:59:39 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2861406773' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 24 18:59:39 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1229367238' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 24 18:59:39 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/278316039' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 18:59:39 compute-0 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: session ls {prefix=session ls} (starting...)
Nov 24 18:59:40 compute-0 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: status {prefix=status} (starting...)
Nov 24 18:59:40 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15075 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:40 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1354: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:40 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 24 18:59:40 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2241713775' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 18:59:40 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15079 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:40 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 24 18:59:40 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2005516369' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 18:59:40 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2241713775' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 18:59:40 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2005516369' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 18:59:40 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 24 18:59:40 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2172093607' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 18:59:41 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 24 18:59:41 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1521262031' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 18:59:41 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 24 18:59:41 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2988803983' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 24 18:59:41 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 24 18:59:41 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3890587240' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 18:59:41 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15091 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:41 compute-0 ceph-mgr[75218]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 24 18:59:41 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:59:41.644+0000 7f6377bb5640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 24 18:59:41 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 24 18:59:41 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2341368656' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 18:59:41 compute-0 ceph-mon[74927]: from='client.15075 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:41 compute-0 ceph-mon[74927]: pgmap v1354: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:41 compute-0 ceph-mon[74927]: from='client.15079 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:41 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2172093607' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 18:59:41 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1521262031' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 18:59:41 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2988803983' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 24 18:59:41 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3890587240' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 18:59:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 24 18:59:42 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/691508376' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 24 18:59:42 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1355: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:42 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15097 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 24 18:59:42 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3315380950' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 24 18:59:42 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15101 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:42 compute-0 ceph-mon[74927]: from='client.15091 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:42 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2341368656' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 18:59:42 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/691508376' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 24 18:59:42 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3315380950' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 24 18:59:42 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15105 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:42 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 24 18:59:42 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3075409262' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 18:59:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:13.718381+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 909312 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:14.718631+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 909312 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:15.718763+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 901120 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:16.718975+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 901120 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:17.719101+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 901120 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:18.719296+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 892928 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:19.719401+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 892928 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:20.719516+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 884736 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:21.719820+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 884736 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:22.720023+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 884736 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:23.720157+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 884736 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:24.720282+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 876544 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:25.720392+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 876544 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:26.720495+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 868352 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:27.720598+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 868352 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:28.720814+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 860160 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:29.720955+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 860160 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:30.721069+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 860160 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:31.721207+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 851968 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:32.721350+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 851968 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:33.721457+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 851968 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:34.721636+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 843776 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:35.721770+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 843776 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:36.721927+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 843776 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:37.722049+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 835584 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:38.722203+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 835584 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:39.722322+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 835584 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:40.722487+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 827392 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:41.722652+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 827392 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:42.723024+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 819200 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:43.723170+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66240512 unmapped: 811008 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:44.723328+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 802816 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:45.723450+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 802816 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:46.723566+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 802816 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:47.723741+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 794624 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:48.723854+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 794624 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:49.723999+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 794624 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:50.724122+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 786432 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:51.724286+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 786432 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:52.724681+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 786432 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:53.724834+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 778240 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:54.724999+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 778240 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:55.725165+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66281472 unmapped: 770048 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:56.725324+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66281472 unmapped: 770048 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:57.725432+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66281472 unmapped: 770048 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:58.725560+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:59.725658+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:00.725771+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:01.725920+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:02.726067+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:03.726165+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:04.726265+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:05.726343+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:06.726450+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:07.726578+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:08.726726+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 745472 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:09.726830+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 745472 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:10.727137+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 745472 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:11.727262+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 737280 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:12.727403+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 737280 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:13.727537+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 729088 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:14.727665+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 729088 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:15.727778+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 729088 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:16.727947+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 729088 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:17.728080+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 720896 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:18.728235+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 720896 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:19.728375+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 712704 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:20.728489+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 712704 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:21.728640+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 712704 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:22.728792+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 704512 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:23.728991+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 704512 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:24.729119+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 696320 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:25.729242+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 696320 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:26.729359+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 696320 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:27.729483+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 688128 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:28.729634+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 688128 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:29.729845+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 679936 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:30.729990+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 679936 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:31.730137+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 679936 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:32.730296+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 671744 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:33.730410+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 671744 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:34.730525+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 655360 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:35.730652+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 655360 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:36.730776+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 655360 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:37.730961+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 647168 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:38.731162+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 647168 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:39.731285+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 638976 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:40.731408+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 638976 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:41.731558+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 638976 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:42.731722+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 630784 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:43.731885+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 630784 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:44.732071+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 630784 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:45.732201+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 622592 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:46.732327+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 622592 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:47.732489+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 630784 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:48.732591+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 622592 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:49.732745+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 614400 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:50.732990+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 606208 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:51.733120+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 606208 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:52.733328+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 606208 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:53.733449+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 598016 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:54.733645+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 598016 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:55.733821+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 589824 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:56.733946+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 589824 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:57.734194+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 589824 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:58.734316+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 581632 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:59.734451+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 581632 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:00.734571+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 581632 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:01.734705+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 573440 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:02.734950+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 573440 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:03.735097+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 565248 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:04.735411+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 557056 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:05.735608+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 557056 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:06.735928+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 548864 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:07.736209+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 548864 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:08.736594+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:09.738048+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 540672 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:10.738383+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 540672 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:11.739312+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 540672 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:12.739601+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 532480 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:13.739775+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 532480 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:14.740073+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 532480 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:15.740214+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 532480 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:16.740395+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 524288 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:17.740606+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 524288 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:18.740734+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 516096 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:19.740891+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 516096 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:20.741109+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 507904 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:21.741273+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 507904 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:22.741472+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 507904 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:23.741620+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:24.741814+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:25.742034+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:26.742197+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 491520 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:27.742367+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 491520 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:28.742498+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 483328 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:29.742624+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 483328 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:30.742813+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 475136 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:31.742961+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 475136 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:32.743158+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 475136 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:33.743294+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 466944 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:34.743484+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 466944 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:35.743642+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 466944 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:36.743850+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 466944 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:37.743971+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 458752 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:38.744103+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 458752 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:39.744239+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 450560 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:40.744361+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 450560 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:41.744500+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 442368 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:42.744657+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 442368 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:43.744791+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 442368 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:44.744930+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 434176 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:45.745035+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 434176 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:46.745168+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 425984 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:47.745312+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 425984 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:48.745598+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 417792 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:49.745726+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 409600 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:50.745871+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 409600 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:51.746031+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 409600 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:52.746217+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 401408 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:53.746333+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 401408 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:54.746442+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 401408 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:55.746573+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 393216 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:56.746701+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 385024 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:57.746814+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 385024 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:58.746972+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 376832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:59.747157+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 376832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:00.747443+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 376832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:01.747542+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 368640 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:02.747713+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 368640 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:03.747886+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 360448 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:04.748048+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 360448 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:05.748213+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 352256 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:06.748336+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 344064 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:07.748478+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 344064 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:08.748652+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 335872 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:09.748928+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 335872 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:10.749142+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 335872 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:11.749303+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 327680 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:12.749824+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 327680 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:13.750006+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 319488 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:14.750186+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 319488 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:15.750348+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 319488 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:16.750469+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 311296 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:17.750763+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 311296 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:18.751045+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 303104 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:19.751221+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 303104 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:20.751366+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 303104 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:21.751598+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 294912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:22.751755+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 294912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:23.751964+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 294912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:24.752106+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 286720 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:25.752276+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 278528 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:26.752443+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 270336 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:27.752635+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 270336 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:28.752968+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 270336 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:29.753102+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 262144 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:30.753222+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 262144 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:31.753343+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 262144 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:32.753503+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 253952 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:33.753654+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 253952 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:34.753781+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 253952 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:35.753927+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 245760 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:36.754056+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 245760 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:37.754176+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 245760 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:38.754301+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 237568 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:39.754468+0000)
Nov 24 18:59:43 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15107 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 237568 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:40.754614+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 229376 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:41.754764+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 229376 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:42.755041+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 221184 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:43.755264+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 221184 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:44.755507+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 221184 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:45.755734+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 212992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:46.755967+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 212992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:47.756196+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 212992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:48.756332+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 204800 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:49.756453+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 204800 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:50.756614+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 204800 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:51.756765+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 196608 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:52.756924+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 196608 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:53.757071+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 188416 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:54.757204+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 188416 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:55.757341+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 180224 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:56.757458+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 180224 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:57.757589+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 180224 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:58.757747+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 172032 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:59.757868+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 172032 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:00.757993+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 172032 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:01.758139+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 163840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:02.758309+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 163840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:03.758434+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 155648 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:04.758606+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 155648 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:05.758775+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 155648 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:06.758939+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 147456 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:07.759100+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 147456 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:08.759224+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 131072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:09.765489+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 131072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:10.765643+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 131072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:11.765794+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 131072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:12.765969+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 122880 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:13.766393+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 131072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:14.767357+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 122880 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:15.768018+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 122880 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:16.768272+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 122880 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:17.768998+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 5482 writes, 23K keys, 5482 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5482 writes, 769 syncs, 7.13 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5482 writes, 23K keys, 5482 commit groups, 1.0 writes per commit group, ingest: 18.33 MB, 0.03 MB/s
                                           Interval WAL: 5482 writes, 769 syncs, 7.13 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.027       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 57344 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:18.769370+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 57344 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:19.769490+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 49152 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:20.769714+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 49152 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:21.770049+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 49152 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:22.770357+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 40960 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:23.770628+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 40960 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:24.770851+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 40960 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:25.771042+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 32768 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:26.771192+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 32768 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:27.771331+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 24576 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:28.771465+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 24576 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:29.771602+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 24576 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:30.771747+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 16384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:31.771881+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 16384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:32.772321+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 16384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:33.772493+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 8192 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:34.772747+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 8192 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:35.772954+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 8192 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:36.773170+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 0 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:37.773299+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 0 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:38.773501+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:39.773624+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:40.773765+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:41.773912+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 1032192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:42.774045+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 1032192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:43.774165+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:44.774303+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:45.774428+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:46.774547+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:47.774672+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:48.774804+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 1007616 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:49.774964+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 1007616 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:50.775099+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:51.775166+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:52.775244+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:53.775379+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:54.775563+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:55.775711+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:56.775841+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:57.775973+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:58.776158+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:59.776298+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 974848 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:00.776406+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 974848 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:01.776537+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:02.776734+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:03.776932+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:04.777058+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 950272 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:05.777127+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 950272 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:06.777258+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 950272 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:07.777388+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 933888 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:08.777512+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 933888 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:09.777678+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 925696 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:10.777790+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 925696 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:11.777980+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 925696 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:12.778179+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:13.778322+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:14.778464+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:15.778797+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 909312 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:16.778974+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 909312 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:17.779087+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 909312 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:18.779196+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:19.779280+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:20.779467+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:21.779682+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:22.779888+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:23.780064+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:24.780200+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:25.780322+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:26.780471+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:27.780588+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:28.780709+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:29.780846+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 328.941497803s of 328.962036133s, submitted: 6
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 499712 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:30.780958+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:31.781087+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:32.781268+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:33.781392+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:34.781502+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:35.781612+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 262144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:36.781736+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 262144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:37.781847+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 262144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:38.781978+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:39.782086+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:40.782200+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:41.782320+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:42.782449+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 237568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:43.782586+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 237568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:44.782740+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 237568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:45.782870+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 229376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:46.782996+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 229376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:47.783114+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:48.783231+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:49.783346+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:50.783495+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 212992 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:51.783648+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 212992 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:52.783811+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:53.783994+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:54.784144+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:55.784274+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:56.784464+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:57.784623+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 188416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:58.784778+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 188416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:59.784978+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 180224 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:00.785153+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 180224 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:01.785282+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 172032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:02.785465+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 172032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:03.785580+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 163840 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:04.785710+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 147456 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:05.785820+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 147456 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:06.785947+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 139264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:07.786068+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 139264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:08.786193+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 139264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:09.786325+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 122880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:10.786456+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 122880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:11.786593+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 114688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:12.786766+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 114688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:13.786909+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:14.787055+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:15.787190+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:16.787307+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:17.787415+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:18.787570+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:19.787693+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:20.787879+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:21.788084+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:22.788275+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:23.788431+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:24.788609+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:25.788868+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:26.789068+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:27.789201+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:28.789316+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:29.789470+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:30.789610+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:31.789744+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:32.789960+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:33.790102+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:34.790265+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 90112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:35.790523+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:36.790727+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:37.790937+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:38.791123+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:39.791251+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:40.791383+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:41.791541+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:42.791706+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:43.791836+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:44.792029+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:45.792167+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:46.792281+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:47.792392+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:48.792558+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:49.792723+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:50.792888+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:51.793046+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:52.793206+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:53.793437+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:54.793592+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:55.793737+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:56.793890+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:57.794064+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:58.794298+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:59.794403+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:00.794499+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:01.794598+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:02.794786+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:03.794940+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:04.795054+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:05.795187+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:06.795360+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:07.795541+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:08.795658+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:09.795822+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:10.795994+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:11.796211+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:12.796401+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:13.796523+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:14.796646+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:15.796820+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:16.797014+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:17.797130+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:18.797309+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:19.797443+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:20.797583+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:21.797720+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:22.797935+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:23.798081+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:24.798210+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:25.798326+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:26.798434+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:27.798573+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:28.798713+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:29.798836+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:30.798976+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:31.799155+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:32.799339+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:33.799454+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:34.799654+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:35.799792+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:36.799907+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:37.800008+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:38.800146+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:39.800274+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:40.800505+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:41.800609+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:42.800921+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:43.801098+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:44.801262+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:45.801432+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:46.801680+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:47.801801+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:48.801931+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:49.802067+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:50.802204+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:51.802328+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:52.802488+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:53.802641+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:54.802811+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:55.802999+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:56.803163+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:57.803282+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:58.803439+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:59.803596+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:00.803758+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 40960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:01.803928+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 40960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:02.804111+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 40960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:03.804248+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 40960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:04.804428+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:05.804591+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:06.804738+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:07.804876+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:08.805045+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:09.805167+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:10.805311+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:11.805431+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:12.805557+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:13.805742+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:14.805928+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:15.806126+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:16.806326+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:17.806453+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:18.806608+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:19.806798+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:20.806988+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:21.807177+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:22.807383+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:23.807523+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:24.807712+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:25.807853+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:26.808009+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:27.808118+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:28.808262+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:29.808421+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:30.808544+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:31.808996+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:32.809157+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:33.809280+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:34.809392+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:35.809535+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:36.809654+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:37.809763+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:38.809880+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:39.810031+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:40.810147+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:41.810253+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:42.810380+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:43.810494+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:44.810639+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:45.810783+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:46.811112+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:47.811247+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:48.811367+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:49.811529+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:50.811639+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:51.811783+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:52.812005+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:53.812163+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:54.812295+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:55.812421+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:56.812581+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:57.812710+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:58.812822+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:59.812950+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:00.813087+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:01.813205+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:02.813844+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:03.814007+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:04.814160+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 1089536 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:05.814286+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:06.814441+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:07.814555+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:08.814870+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:09.815076+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:10.815219+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:11.815343+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:12.815485+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:13.815632+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:14.815761+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:15.815882+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:16.816019+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:17.816144+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:18.816255+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:19.816369+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:20.816474+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:21.816809+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:22.816981+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:23.817182+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:24.817324+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:25.817454+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:26.817579+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:27.817707+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:28.817819+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:29.817956+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:30.818066+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:31.818195+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:32.818340+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:33.818471+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:34.818613+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:35.818733+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:36.818931+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:37.819081+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:38.819198+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:39.819315+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:40.819575+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:41.819727+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:42.819980+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:43.820098+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:44.820230+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:45.820378+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:46.820557+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:47.820700+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:48.820875+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:49.821048+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:50.821184+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:51.821356+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:52.821523+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:53.821633+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:54.821770+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:55.821963+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:56.822097+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:57.822227+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:58.822381+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:59.822559+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:00.822802+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:01.822994+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:02.823162+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:03.823293+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:04.823450+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:05.823561+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:06.823672+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:07.823795+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:08.823942+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:09.824097+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:10.824245+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:11.824358+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:12.824513+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:13.824660+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:14.824791+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:15.824924+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:16.825071+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:17.825190+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:18.825316+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:19.825451+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:20.825567+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:21.825712+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:22.825856+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:23.825990+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:24.826126+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:25.826258+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:26.826370+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:27.826498+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:28.826622+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:29.826735+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:30.826861+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:31.827013+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:32.827202+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:33.827408+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:34.827581+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:35.827710+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:36.827863+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:37.828030+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:38.828153+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:39.828262+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:40.828412+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:41.828554+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:42.828720+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:43.828866+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:44.829304+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:45.829428+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:46.829548+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:47.829672+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:48.829804+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:49.829946+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:50.830086+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:51.830210+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:52.830364+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:53.830473+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:54.830725+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:55.830891+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:56.831067+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:57.831256+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:58.831380+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:59.831498+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:00.831605+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:01.831754+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:02.831914+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:03.832059+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:04.832216+0000)
Nov 24 18:59:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 24 18:59:43 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1836565426' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:05.832347+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:06.832615+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:07.832823+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:08.833129+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:09.833366+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:10.833630+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:11.833754+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:12.833914+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:13.834031+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:14.834153+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:15.834282+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:16.834395+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:17.834579+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:18.834694+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:19.834962+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:20.835122+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:21.835285+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:22.835436+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:23.835603+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:24.835767+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:25.835933+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:26.836073+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:27.836185+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:28.836369+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:29.836571+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:30.836752+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:31.836919+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:32.837211+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:33.837431+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:34.837595+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:35.837928+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:36.838053+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:37.838239+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:38.838432+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:39.838598+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:40.838799+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:41.838957+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:42.839097+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:43.839216+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:44.839341+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:45.839500+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:46.839646+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:47.839787+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:48.839965+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:49.840168+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:50.840433+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:51.840561+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:52.840719+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:53.840969+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:54.841392+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:55.841798+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:56.841944+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:57.842079+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:58.842195+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:59.842321+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:00.842448+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:01.842602+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:02.842754+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:03.842942+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:04.843078+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:05.843220+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:06.843358+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:07.843559+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:08.843742+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:09.844003+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:10.844194+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:11.844317+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:12.844548+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:13.844665+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:14.844880+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:15.845058+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:16.845180+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:17.845306+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:18.845430+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:19.845557+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:20.845695+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:21.845833+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:22.845992+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:23.846111+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:24.846230+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:25.846359+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:26.846460+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:27.846590+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:28.846699+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:29.846926+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:30.847029+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:31.847127+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:32.847271+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:33.849351+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:34.849487+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:35.849605+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:36.849715+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:37.849869+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:38.850062+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:39.850176+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:40.850273+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:41.850422+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:42.850591+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:43.850726+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:44.850859+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:45.850948+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:46.851066+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:47.851228+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:48.851754+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:49.851985+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:50.852104+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:51.852635+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:52.852811+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:53.853082+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:54.863078+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:55.863394+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:56.863554+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:57.863690+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:58.863981+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:59.864095+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:00.864263+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:01.864407+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:02.864578+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:03.864704+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:04.864826+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:05.864963+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:06.865102+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:07.865276+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:08.865438+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:09.865565+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:10.865736+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:11.865929+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:12.866152+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:13.866303+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:14.866462+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:15.866619+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:16.866747+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:17.866882+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:18.867090+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:19.867239+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:20.867385+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:21.867582+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:22.867790+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:23.867920+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:24.868033+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:25.868142+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:26.868257+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:27.868373+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:28.868490+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:29.868647+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:30.868780+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:31.868971+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:32.869191+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:33.869327+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:34.869481+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:35.869681+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:36.869958+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:37.870176+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:38.870348+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:39.870524+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:40.870650+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:41.870788+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:42.871038+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:43.871210+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:44.871362+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:45.871510+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:46.871652+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:47.871923+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:48.872080+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:49.872183+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:50.872299+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:51.872427+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:52.872641+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:53.872766+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:54.872912+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:55.873058+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:56.873176+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:57.873358+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:58.873511+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:59.873629+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:00.873811+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:01.873928+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:02.874076+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:03.874214+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:04.874336+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:05.874481+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:06.874637+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:07.874788+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:08.874967+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:09.875129+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:10.875338+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:11.875461+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:12.875605+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:13.875731+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:14.875890+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:15.876093+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:16.876228+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:17.876351+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 5662 writes, 23K keys, 5662 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5662 writes, 859 syncs, 6.59 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.027       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:18.876492+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:19.876654+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:20.876772+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:21.876936+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:22.877114+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:23.877218+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:24.877332+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:25.877495+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:26.877623+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:27.877772+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:28.877889+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:29.878039+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:30.878156+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:31.878326+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:32.878545+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:33.878635+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:34.878749+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:35.878878+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:36.878949+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:37.879060+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:38.879181+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:39.879291+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:40.879406+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:41.879526+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:42.879662+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:43.879783+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:44.879953+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:45.880066+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:46.880193+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:47.880305+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:48.880467+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:49.880704+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:50.880830+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:51.880956+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:52.881518+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:53.881631+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 884736 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:54.881770+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 884736 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:55.881946+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:56.882071+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:57.882248+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:58.882375+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:59.882499+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:00.882640+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:01.882776+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:02.883024+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:03.883139+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:04.883255+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:05.883368+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:06.883479+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:07.883625+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:08.883789+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:09.883930+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:10.884049+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:11.884169+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:12.884304+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:13.884438+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:14.884607+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:15.884751+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:16.884870+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:17.885010+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:18.885119+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:19.885231+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 851968 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:20.885363+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:21.885500+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:22.885745+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:23.885878+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:24.886012+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:25.886128+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:26.886386+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:27.886503+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:28.886624+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 884736 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:29.886994+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 599.769165039s of 600.112915039s, submitted: 90
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 1744896 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:30.887112+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:31.887406+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:32.887545+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:33.887718+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:34.887873+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:35.887980+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:36.888151+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:37.888279+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:38.888423+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:39.888684+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:40.888833+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:41.888957+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:42.889163+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:43.889359+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:44.889488+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:45.889623+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:46.889769+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:47.889963+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:48.890096+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:49.890235+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:50.890333+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:51.890447+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:52.890598+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:53.890756+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:54.890948+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:55.891065+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:56.891193+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:57.891370+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:58.891511+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:59.891665+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:00.891841+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:01.892631+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:02.893417+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:03.893647+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:04.893778+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 1556480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:05.893934+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 1556480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:06.894051+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 1556480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:07.894160+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 1556480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:08.894288+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 1556480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:09.894445+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:10.894568+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:11.894719+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:12.894873+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:13.895001+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:14.895127+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:15.895232+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:16.895345+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:17.895477+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:18.895593+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:19.895703+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:20.895852+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:21.896003+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:22.896213+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:23.896368+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:24.896494+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:25.896612+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:26.896749+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:27.896953+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:28.897271+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:29.897441+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:30.897578+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:31.897750+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:32.897961+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:33.898127+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:34.898244+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:35.898364+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:36.898487+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:37.898614+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:38.898754+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:39.898988+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:40.899095+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:41.899203+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:42.899359+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:43.899512+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:44.899633+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:45.899802+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:46.899943+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:47.900084+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:48.900230+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:49.900446+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:50.900675+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:51.900825+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:52.900976+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:53.901093+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:54.901227+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:55.901353+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:56.901481+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:57.901592+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:58.901731+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:59.901850+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:00.901976+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:01.902101+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:02.902290+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:03.902446+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:04.902589+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:05.902722+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:06.902961+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:07.903217+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:08.903417+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:09.903642+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:10.903833+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:11.904017+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:12.904239+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:13.904413+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:14.904571+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:15.904794+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:16.905009+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:17.905153+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:18.905311+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:19.905555+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:20.905747+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:21.905940+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:22.906107+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:23.906293+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:24.906424+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:25.906581+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:26.906707+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:27.906854+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:28.906971+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:29.907116+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:30.907270+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:31.907415+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:32.907592+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:33.907706+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:34.907842+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:35.908003+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:36.908154+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:37.908280+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:38.908448+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:39.908771+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:40.909136+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:41.909400+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:42.909713+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:43.909997+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:44.910293+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:45.910531+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:46.910745+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:47.910986+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:48.911249+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:49.911478+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:50.911718+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:51.911952+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:52.912203+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:53.912399+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:54.912594+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:55.912808+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:56.912975+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:57.913096+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:58.913226+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:59.913377+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:00.913535+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:01.913740+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:02.913932+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:03.914074+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:04.914226+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:05.914431+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:06.914609+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:07.914735+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:08.914852+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:09.914991+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:10.915153+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:11.915277+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:12.915479+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:13.915661+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:14.915836+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:15.915986+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:16.916204+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:17.916342+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:18.916515+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:19.916652+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:20.916777+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:21.916960+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:22.917133+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:23.917255+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:24.917413+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:25.917562+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:26.917695+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:27.917833+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:28.917982+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:29.918162+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:30.918302+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:31.918555+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:32.918756+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:33.919050+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:34.919305+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:35.919510+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:36.919777+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:37.919965+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:38.920102+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:39.920249+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:40.920386+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:41.920545+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:42.920781+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:43.920959+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:44.921277+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:45.921844+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:46.923084+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:47.923428+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:48.923567+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:49.924462+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:50.924984+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:51.925223+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:52.925823+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:53.926137+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:54.926719+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:55.926977+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:56.927136+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:57.927267+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:58.927429+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:59.927591+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:00.927840+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:01.928083+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:02.928245+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:03.928362+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:04.928493+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:05.928691+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:06.928840+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:07.928945+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:08.929056+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:09.929185+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:10.929412+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:11.929536+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:12.929684+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:13.929864+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:14.930029+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:15.930256+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:16.930417+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:17.930541+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:18.930658+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:19.930723+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:20.930843+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:21.930960+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:22.931121+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:23.931258+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:24.931467+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:25.931634+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:26.931793+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:27.931940+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:28.932089+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:29.932221+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:30.932370+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:31.932471+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:32.932630+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:33.932711+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:34.932852+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:35.933034+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:36.933216+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:37.933353+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:38.933525+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:39.933677+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:40.933827+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:41.934105+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:42.934306+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:43.934451+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:44.934651+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:45.934778+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:46.934978+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:47.935121+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:48.935259+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:49.935389+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:50.935563+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:51.935695+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:52.936016+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:53.936153+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:54.936346+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:55.936506+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:56.936637+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:57.936765+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:58.936945+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:59.937351+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:00.937507+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:01.937670+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:02.937873+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:03.938022+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:04.938269+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:05.938415+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:06.938545+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:07.938705+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:08.938867+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:09.938983+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:10.939146+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:11.939312+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:12.939474+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:13.939597+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:14.939723+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:15.939956+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:16.940098+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:17.940255+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:18.940378+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:19.940502+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:20.941691+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:21.942659+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:22.943466+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:23.943990+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:24.944304+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:25.944496+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:26.947277+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:27.949110+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:28.951614+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:29.952322+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:30.952477+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:31.952839+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:32.952998+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:33.953874+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:34.954441+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:35.954644+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:36.954949+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:37.955141+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:38.955448+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:39.955624+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:40.955861+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:41.956060+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:42.956348+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:43.956548+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:44.956752+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:45.956940+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:46.957184+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:47.957363+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:48.957530+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:49.957672+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:50.957849+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:51.958062+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:52.958253+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:53.958415+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:54.958571+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:55.958677+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:56.958817+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:57.959021+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:58.959199+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:59.959349+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:00.959470+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:01.959603+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:02.959773+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:03.959976+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:04.960129+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:05.960271+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:06.960456+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:07.960657+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:08.960813+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:09.960987+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:10.961101+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:11.961260+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:12.961493+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:13.961618+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:14.961754+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:15.961961+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:16.962096+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:17.962222+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:18.962355+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:19.962582+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:20.962724+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:21.962952+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:22.963161+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:23.963267+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:24.963418+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:25.963595+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:26.964624+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:27.964791+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:28.965704+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:29.965864+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:30.966709+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:31.967353+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:32.967518+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:33.967782+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:34.967935+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:35.968326+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:36.968890+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:37.969181+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:38.969522+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:39.969719+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:40.969834+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:41.969961+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:42.970231+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861424400
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:43.970422+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:44.970582+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 120 handle_osd_map epochs [121,122], i have 120, src has [1,122]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 375.062835693s of 375.391784668s, submitted: 90
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:45.970791+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 9699328 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab4000/0x0/0x4ffc00000, data 0xaf0c9/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:46.971042+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70025216 unmapped: 16957440 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 123 ms_handle_reset con 0x556861424400 session 0x556861a3e000
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:47.971306+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 16941056 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fbab4000/0x0/0x4ffc00000, data 0x10af0c9/0x1169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949338 data_alloc: 218103808 data_used: 221184
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fbab0000/0x0/0x4ffc00000, data 0x10b0c85/0x116d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fa43c00
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:48.971426+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70172672 unmapped: 16809984 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 123 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 124 handle_osd_map epochs [124,124], i have 124, src has [1,124]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 124 ms_handle_reset con 0x55685fa43c00 session 0x556861a3e1e0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:49.971635+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 16613376 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:50.971847+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fbaaa000/0x0/0x4ffc00000, data 0x10b2851/0x1172000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:51.971976+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:52.972268+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959866 data_alloc: 218103808 data_used: 221184
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:53.972449+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:54.972706+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:55.972999+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:56.973180+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:57.973442+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959866 data_alloc: 218103808 data_used: 221184
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:58.973567+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:59.973699+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:00.973870+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:01.974013+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:02.974172+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959866 data_alloc: 218103808 data_used: 221184
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:03.974301+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:04.974463+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:05.974588+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:06.974722+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:07.974842+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960026 data_alloc: 218103808 data_used: 225280
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:08.974987+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:09.975133+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:10.975293+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:11.975432+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:12.975666+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960026 data_alloc: 218103808 data_used: 225280
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:13.975824+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:14.975984+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:15.976144+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:16.976317+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:17.976496+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960026 data_alloc: 218103808 data_used: 225280
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:18.976668+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:19.976831+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:20.977016+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685ecad000
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.260654449s of 35.765483856s, submitted: 60
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:21.977183+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70451200 unmapped: 16531456 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 126 ms_handle_reset con 0x55685ecad000 session 0x556861bab0e0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42d7/0x1176000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:22.977388+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fbaa4000/0x0/0x4ffc00000, data 0x10b5e54/0x1179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70451200 unmapped: 16531456 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fa43c00
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964309 data_alloc: 218103808 data_used: 233472
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:23.977533+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70483968 unmapped: 16498688 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 127 ms_handle_reset con 0x55685fa43c00 session 0x556861bab680
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:24.977718+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70565888 unmapped: 16416768 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:25.977986+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70565888 unmapped: 16416768 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee0c00
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:26.978146+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 71614464 unmapped: 15368192 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 128 ms_handle_reset con 0x55685fee0c00 session 0x55685ff2cf00
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:27.978319+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 15376384 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971712 data_alloc: 218103808 data_used: 249856
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fba9e000/0x0/0x4ffc00000, data 0x10b999b/0x117f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:28.978485+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 15376384 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1000
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:29.978598+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 15056896 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861424400
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:30.979173+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 22200320 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.523596764s of 10.044019699s, submitted: 88
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 129 ms_handle_reset con 0x556861424400 session 0x556861a3fc20
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685ecad400
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:31.979354+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1c00
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 21004288 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 130 ms_handle_reset con 0x55685fee1000 session 0x55685f1890e0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 130 ms_handle_reset con 0x55685ecad400 session 0x55685ff31e00
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685ecad400
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 131 ms_handle_reset con 0x55685fee1c00 session 0x556861b321e0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 131 ms_handle_reset con 0x55685ecad400 session 0x556861b42960
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:32.980188+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 131 heartbeat osd_stat(store_statfs(0x4f8a8c000/0x0/0x4ffc00000, data 0x40bfbe2/0x4190000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 20930560 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fa43c00
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee0c00
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1000
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334958 data_alloc: 218103808 data_used: 266240
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 131 ms_handle_reset con 0x55685fee1000 session 0x556861b33860
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861424400
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 131 ms_handle_reset con 0x556861424400 session 0x556861b42780
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:33.980606+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 19914752 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 132 ms_handle_reset con 0x55685fee0c00 session 0x55685f0854a0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685ecad400
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 132 ms_handle_reset con 0x55685fa43c00 session 0x556861b42f00
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1000
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f8a8a000/0x0/0x4ffc00000, data 0x40bfc15/0x4192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:34.980842+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1c00
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 19832832 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 133 ms_handle_reset con 0x55685fee1000 session 0x556861b5f4a0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 133 ms_handle_reset con 0x55685ecad400 session 0x55685f048960
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 133 ms_handle_reset con 0x55685fee1c00 session 0x556861b43860
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861424400
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 133 ms_handle_reset con 0x556861424400 session 0x556861b5f680
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685ecad400
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fa43c00
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 133 ms_handle_reset con 0x55685fa43c00 session 0x55685f049c20
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:35.981155+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 18759680 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 134 ms_handle_reset con 0x55685ecad400 session 0x556861b5fa40
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1000
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:36.981621+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fba7d000/0x0/0x4ffc00000, data 0x10c5d75/0x119e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 18751488 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 135 ms_handle_reset con 0x55685fee1000 session 0x55685ff2cf00
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:37.981769+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1c00
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 18718720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 136 ms_handle_reset con 0x55685fee1c00 session 0x55685f0854a0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1030796 data_alloc: 218103808 data_used: 266240
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:38.982125+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861424800
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 18628608 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x5568610d1000
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:39.982243+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 18595840 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 137 ms_handle_reset con 0x556861424800 session 0x556861b74f00
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 137 ms_handle_reset con 0x5568610d1000 session 0x55685e9225a0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:40.982565+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 17547264 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685ecad000
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fba77000/0x0/0x4ffc00000, data 0x10cafaa/0x11a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:41.982814+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 17547264 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.963214874s of 11.130927086s, submitted: 311
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 138 ms_handle_reset con 0x55685ecad000 session 0x556861b8c780
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:42.983241+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685ecad400
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 17514496 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba73000/0x0/0x4ffc00000, data 0x10cda61/0x11a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042431 data_alloc: 218103808 data_used: 278528
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:43.983355+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 140 ms_handle_reset con 0x55685ecad400 session 0x556861b8cf00
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 17448960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fa42800
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:44.983510+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 17448960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 141 ms_handle_reset con 0x55685fa42800 session 0x556861b8dc20
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fa43c00
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 141 ms_handle_reset con 0x55685fa43c00 session 0x55686331a1e0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685ecad000
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:45.983641+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685ecad400
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 17309696 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fba73000/0x0/0x4ffc00000, data 0x10d035e/0x11aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 142 ms_handle_reset con 0x55685ecad400 session 0x556861a32b40
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fa42800
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:46.983827+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 17170432 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x5568610d1000
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 143 ms_handle_reset con 0x5568610d1000 session 0x556861b321e0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 143 ms_handle_reset con 0x55685ecad000 session 0x55686331a780
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1c00
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 143 ms_handle_reset con 0x55685fee1c00 session 0x55686331af00
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 143 ms_handle_reset con 0x55685fa42800 session 0x556860d0c1e0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:47.984243+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 17203200 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053039 data_alloc: 218103808 data_used: 290816
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:48.984448+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 17170432 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fba6b000/0x0/0x4ffc00000, data 0x10d590a/0x11b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:49.984643+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 17170432 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:50.984839+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17162240 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:51.984985+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17162240 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:52.985140+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17162240 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053039 data_alloc: 218103808 data_used: 290816
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fba6b000/0x0/0x4ffc00000, data 0x10d590a/0x11b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 144 handle_osd_map epochs [145,145], i have 145, src has [1,145]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.896071434s of 11.615738869s, submitted: 215
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:53.985306+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 17145856 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:54.985534+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 17145856 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:55.985727+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 17145856 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fa42800
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:56.985880+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 17145856 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 145 handle_osd_map epochs [146,146], i have 146, src has [1,146]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:57.986082+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 17137664 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 147 ms_handle_reset con 0x55685fa42800 session 0x55686331b680
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063545 data_alloc: 218103808 data_used: 290816
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:58.986240+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685ecad000
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685ecad400
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 17080320 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fba62000/0x0/0x4ffc00000, data 0x10dab66/0x11ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:59.986401+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 17096704 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:00.986520+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 17096704 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1000
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:01.986659+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861a5e800
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 17096704 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 148 handle_osd_map epochs [148,148], i have 148, src has [1,148]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 148 ms_handle_reset con 0x556861a5e800 session 0x55686331bc20
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 148 ms_handle_reset con 0x55685fee1000 session 0x556861a3e960
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:02.987270+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861a5e400
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 17014784 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 149 ms_handle_reset con 0x556861a5e400 session 0x556861a32b40
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1070073 data_alloc: 218103808 data_used: 307200
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fba5c000/0x0/0x4ffc00000, data 0x10de2b4/0x11c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:03.987971+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fc06c00
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.863252640s of 10.083137512s, submitted: 82
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 149 ms_handle_reset con 0x55685fc06c00 session 0x556863375c20
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fa42800
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 16809984 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 149 handle_osd_map epochs [150,150], i have 150, src has [1,150]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 150 ms_handle_reset con 0x55685fa42800 session 0x55686331ad20
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:04.988436+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1000
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 16809984 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:05.988718+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 16801792 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 151 ms_handle_reset con 0x55685fee1000 session 0x5568633743c0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861a5e400
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:06.988884+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16793600 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fba56000/0x0/0x4ffc00000, data 0x10e1a95/0x11c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 152 ms_handle_reset con 0x556861a5e400 session 0x55686331a000
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:07.989069+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16793600 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fba51000/0x0/0x4ffc00000, data 0x10e366d/0x11cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082688 data_alloc: 218103808 data_used: 307200
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:08.989292+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16793600 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:09.989657+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16793600 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:10.990114+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16793600 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:11.990330+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16793600 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861a5e800
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 152 ms_handle_reset con 0x556861a5e800 session 0x5568633743c0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861ae1000
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861ae1400
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x556861ae1400 session 0x5568633a6000
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x556861ae1000 session 0x55686331b680
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:12.990478+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fa42800
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x55685fa42800 session 0x55686331ad20
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1c00
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x55685fee1c00 session 0x556861a3e960
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1000
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x55685fee1000 session 0x556860d0c1e0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861a5fc00
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x556861a5fc00 session 0x5568632121e0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fba4e000/0x0/0x4ffc00000, data 0x10e512b/0x11cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 16744448 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fba4e000/0x0/0x4ffc00000, data 0x10e512b/0x11cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086691 data_alloc: 218103808 data_used: 315392
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:13.990695+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 16744448 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fa42800
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.245035172s of 10.595973969s, submitted: 74
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x55685fa42800 session 0x5568632125a0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1000
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1c00
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:14.990828+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78938112 unmapped: 16441344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:15.990980+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78938112 unmapped: 16441344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:16.991116+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861ae1000
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x556861ae1000 session 0x55685f04a960
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861a5e800
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78938112 unmapped: 16441344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fba2b000/0x0/0x4ffc00000, data 0x110914a/0x11f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:17.991241+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 154 ms_handle_reset con 0x556861a5e800 session 0x55685f250780
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861a5e400
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861a5d000
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 154 ms_handle_reset con 0x556861a5d000 session 0x5568632130e0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 154 ms_handle_reset con 0x556861a5e400 session 0x556861b752c0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fa42800
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 154 ms_handle_reset con 0x55685fa42800 session 0x556861b42d20
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861a5d000
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 16400384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094730 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:18.991437+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 155 ms_handle_reset con 0x556861a5d000 session 0x556861528000
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 16400384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 155 handle_osd_map epochs [155,156], i have 155, src has [1,156]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:19.991568+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861a5e800
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 156 ms_handle_reset con 0x556861a5e800 session 0x556861bab860
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 16400384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861ae1000
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 156 ms_handle_reset con 0x556861ae1000 session 0x556861b8c3c0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861a5d400
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:20.991681+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 156 ms_handle_reset con 0x556861a5d400 session 0x556861b8c1e0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861a5d400
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 156 ms_handle_reset con 0x556861a5d400 session 0x556861ab4f00
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 16400384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:21.991838+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 16400384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fa42800
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fba1e000/0x0/0x4ffc00000, data 0x110e473/0x11fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 156 ms_handle_reset con 0x55685fa42800 session 0x556861ab4d20
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:22.991977+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x556861a5d000
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 16515072 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100812 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:23.992123+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78880768 unmapped: 16498688 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 157 ms_handle_reset con 0x556861a5d000 session 0x5568615290e0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:24.992295+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78880768 unmapped: 16498688 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 157 ms_handle_reset con 0x55685fee1000 session 0x556863212f00
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.930842400s of 11.172493935s, submitted: 53
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 157 ms_handle_reset con 0x55685fee1c00 session 0x5568633a61e0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fa42800
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:25.992431+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fba1e000/0x0/0x4ffc00000, data 0x111001e/0x11ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [0,0,1])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78888960 unmapped: 16490496 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:26.992658+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _renew_subs
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 158 ms_handle_reset con 0x55685fa42800 session 0x5568632132c0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78905344 unmapped: 16474112 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:27.992842+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78905344 unmapped: 16474112 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106820 data_alloc: 218103808 data_used: 327680
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:28.993075+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 16457728 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x10ef625/0x11df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:29.993248+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 16457728 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 159 ms_handle_reset con 0x55685ecad000 session 0x55686331ba40
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 159 ms_handle_reset con 0x55685ecad400 session 0x556861bab0e0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:30.993713+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1000
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 159 ms_handle_reset con 0x55685fee1000 session 0x5568633a6780
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 16457728 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:31.993977+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685fee1c00
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 16457728 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 159 ms_handle_reset con 0x55685fee1c00 session 0x5568633a6b40
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685ecad000
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 159 handle_osd_map epochs [159,160], i have 159, src has [1,160]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:32.994277+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 160 ms_handle_reset con 0x55685ecad000 session 0x5568633a72c0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108734 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:33.994529+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fba3b000/0x0/0x4ffc00000, data 0x10f122e/0x11e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:34.994696+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:35.994829+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:36.994950+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:37.995079+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108734 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:38.995202+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fba3b000/0x0/0x4ffc00000, data 0x10f122e/0x11e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.445519447s of 13.752939224s, submitted: 101
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:39.995379+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:40.995517+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:41.995687+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:42.995849+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:43.995982+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:44.996119+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:45.996422+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:46.996733+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:47.996973+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:48.997170+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:49.997340+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:50.997530+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:51.997720+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:52.998006+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:53.998164+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:54.998293+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:55.998481+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:56.998746+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:57.999015+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:58.999156+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:59.999303+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:00.999439+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:01.999620+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:02.999830+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:03.999971+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:05.000135+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:06.000307+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:07.000446+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:08.000558+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:09.000720+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:10.000971+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:11.001172+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:12.001345+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:13.001516+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:14.001649+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:15.001808+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:16.002193+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:17.002378+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 7658 writes, 29K keys, 7658 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7658 writes, 1723 syncs, 4.44 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1996 writes, 5287 keys, 1996 commit groups, 1.0 writes per commit group, ingest: 2.75 MB, 0.00 MB/s
                                           Interval WAL: 1996 writes, 864 syncs, 2.31 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:18.002542+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:19.002675+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:20.002822+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:21.002992+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:22.003143+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:23.003305+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: mgrc ms_handle_reset ms_handle_reset con 0x55685f53c000
Nov 24 18:59:43 compute-0 ceph-osd[90655]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/536471675
Nov 24 18:59:43 compute-0 ceph-osd[90655]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/536471675,v1:192.168.122.100:6801/536471675]
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: get_auth_request con 0x556861ae1400 auth_method 0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: mgrc handle_mgr_configure stats_period=5
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:24.003478+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:25.003623+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:26.003812+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:27.003948+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:28.004086+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:29.004203+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:30.004370+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:31.004502+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:32.004629+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:33.004808+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:34.004965+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:35.005085+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:36.005220+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:37.005311+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:38.005444+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:39.005594+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:40.005718+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:41.005857+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:42.006007+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:43.006173+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:44.006322+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:45.006495+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:46.006627+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:47.006748+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:48.006891+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:49.007048+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:50.007189+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:51.007325+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:52.007501+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:53.007677+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:54.007823+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:55.007960+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:56.008127+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:57.008337+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:58.008497+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:59.008666+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:00.008786+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:01.008916+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:02.009030+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:03.009168+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:04.009301+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:05.009433+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:06.009661+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:07.009821+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:08.010012+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:09.010156+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:10.010268+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:11.010395+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:12.010521+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:13.010711+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:14.010865+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:15.011181+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:16.011296+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 16146432 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:17.011413+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: do_command 'config diff' '{prefix=config diff}'
Nov 24 18:59:43 compute-0 ceph-osd[90655]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: do_command 'config show' '{prefix=config show}'
Nov 24 18:59:43 compute-0 ceph-osd[90655]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 24 18:59:43 compute-0 ceph-osd[90655]: do_command 'counter dump' '{prefix=counter dump}'
Nov 24 18:59:43 compute-0 ceph-osd[90655]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79536128 unmapped: 15843328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: do_command 'counter schema' '{prefix=counter schema}'
Nov 24 18:59:43 compute-0 ceph-osd[90655]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:18.011542+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 15564800 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:19.011685+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 15515648 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: do_command 'log dump' '{prefix=log dump}'
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:20.011921+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 24 18:59:43 compute-0 ceph-osd[90655]: do_command 'perf dump' '{prefix=perf dump}'
Nov 24 18:59:43 compute-0 ceph-osd[90655]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 24 18:59:43 compute-0 ceph-osd[90655]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 24 18:59:43 compute-0 ceph-osd[90655]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 24 18:59:43 compute-0 ceph-osd[90655]: do_command 'perf schema' '{prefix=perf schema}'
Nov 24 18:59:43 compute-0 ceph-osd[90655]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 15548416 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:21.012031+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 15474688 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:22.012235+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 ms_handle_reset con 0x55685fee0400 session 0x55685fab10e0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: handle_auth_request added challenge on 0x55685ecad400
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 15466496 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:23.012374+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 15466496 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:24.012483+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 15466496 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:25.012597+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 15466496 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:26.013135+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 15466496 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:27.013257+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 15466496 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:28.013373+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 15466496 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:29.013509+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 15466496 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:30.013624+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 110.551521301s of 110.563240051s, submitted: 58
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79929344 unmapped: 15450112 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:31.013741+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:32.013862+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:33.014044+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:34.014200+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:35.014320+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:36.014487+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:37.014612+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:38.014742+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:39.014858+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:40.015000+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:41.015122+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:42.015291+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:43.015461+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:44.015572+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:45.015695+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:46.015883+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:47.016055+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:48.016179+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:49.016333+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:50.016480+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:51.016824+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:52.017466+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:53.017871+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:54.018078+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:55.018331+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:56.018451+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:57.018570+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:58.018687+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:59.019019+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:00.019158+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:01.019502+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:02.019709+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:03.020010+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:04.020188+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:05.020435+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 15417344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:06.020571+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 15417344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:07.020797+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 15417344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:08.021088+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 15417344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:09.021338+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 15417344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:10.021463+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 15417344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:11.021587+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 15417344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:12.021719+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 15417344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:13.021869+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 15417344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:14.021966+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 15417344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:15.022158+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 15417344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:16.022289+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 15417344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:17.022511+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:59:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:59:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:59:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 18:59:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:59:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 18:59:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:59:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:59:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:59:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 18:59:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:59:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:59:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:59:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:59:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:59:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:59:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:59:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:59:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:59:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:59:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:59:43 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:18.022740+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:19.022893+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:20.023040+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:21.023153+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:22.023296+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:23.023536+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:24.023692+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:25.023864+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:26.024023+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:27.024232+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:28.024430+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:29.024680+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:30.024979+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:31.025170+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:32.025320+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:33.025467+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:34.025629+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:35.025828+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:36.026009+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:37.026157+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:38.026354+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:39.026508+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:40.026625+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:41.026837+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:42.027061+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:43.027229+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:44.027452+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:45.027601+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:46.027749+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:47.027952+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:48.028087+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:49.028445+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:50.028680+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:51.028941+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:52.029174+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:53.029429+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:54.029652+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:55.029848+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:56.030026+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:57.030265+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:58.030514+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:59.030748+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:00.030969+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:01.031177+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:02.031372+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:03.031689+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:04.031837+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:05.031988+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:06.032105+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:07.032353+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:08.032490+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:09.032645+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:10.032777+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:11.032953+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:12.033130+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:13.033379+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:14.033577+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:15.033733+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:16.033992+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:17.034182+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:18.034382+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:19.034537+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:20.034699+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:21.034845+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:22.035021+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:23.035233+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:24.035376+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:25.035597+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:26.035734+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:27.035868+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:28.036024+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:29.036167+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:30.036347+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:31.036486+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:32.036603+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:33.036784+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:34.036945+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:35.037111+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:36.037223+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:37.037317+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:38.037413+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:39.037569+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:40.037709+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:41.037857+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:42.038025+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:43.038182+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:44.038333+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:45.038454+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:46.038559+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:47.038790+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:48.038977+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:49.039145+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:50.039310+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:51.039443+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:52.039625+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:53.039814+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:54.040018+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:55.040204+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:56.040387+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:57.040601+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:58.040770+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:59.041005+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:00.041178+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:01.041335+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:02.041495+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:03.041702+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:04.042062+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:05.042298+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:06.042481+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:07.042649+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:08.042822+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:09.042962+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:10.043139+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:11.043317+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 15400960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:12.043466+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 15400960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:13.043662+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 15400960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:14.043798+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 15400960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:15.044004+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 15400960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:16.044149+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 15400960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:17.044388+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 15400960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:18.044537+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 15400960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:19.044746+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 15400960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:20.044923+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 15400960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:21.045138+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 15400960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:22.045283+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 15400960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:23.045535+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:24.045793+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:25.046003+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:26.046177+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:27.046343+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:28.046518+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:29.046666+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:30.046807+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:31.046969+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:32.047181+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:33.047437+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:34.047781+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:35.048023+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:36.048163+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:37.048329+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:38.048470+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:39.048582+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:40.048746+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:41.048983+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:42.049216+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:43.049439+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:44.049570+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:45.049698+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:46.049852+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:47.050013+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:48.050220+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:49.050342+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:50.050452+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:51.050587+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:52.050748+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:53.050948+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:54.051080+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:55.051228+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:56.051387+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:57.051517+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:58.051655+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:59.052778+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:00.054008+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:01.054258+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:02.055310+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:03.055647+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:04.056271+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:05.056513+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:06.056867+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:07.057123+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:08.057382+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:09.057527+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:10.057959+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:11.058380+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:12.058538+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:13.059057+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:14.059219+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:15.059577+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:16.059760+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:17.059980+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:18.060254+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:19.060616+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:20.060738+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:21.060934+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:22.061329+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:23.061522+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:24.061794+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:25.061936+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:26.062243+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:27.062546+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:28.062784+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:29.063423+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:30.063660+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:31.063884+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:32.064410+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:33.064884+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 15360000 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:34.065352+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 15360000 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:35.065671+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 15360000 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:36.066014+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 15360000 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:37.066321+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 15360000 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:38.066633+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 15360000 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:39.066891+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 15360000 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:40.067204+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 15360000 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:41.067402+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:42.067594+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:43.067798+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:44.067979+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:45.068156+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:46.068330+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:47.068478+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:48.068661+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:49.068813+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:50.068971+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:51.069155+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:52.069373+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:53.069593+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:54.069781+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:55.069980+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:56.070130+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:57.070284+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:58.070485+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:59.070617+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:00.070766+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:01.071074+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:02.071254+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:03.071436+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:04.071583+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:05.071734+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:06.071890+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:07.072069+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:08.072233+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:09.072380+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:10.072562+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:11.072724+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:12.072845+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:13.073026+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:14.073166+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:15.073300+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:16.073528+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:17.073713+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:18.073935+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:19.074163+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:20.074415+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:21.074671+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:22.074946+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:23.075183+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:24.075399+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:25.075566+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:26.075796+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:27.076067+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:28.076262+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:29.076408+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:30.076536+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:31.076649+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:32.076805+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:33.077020+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:34.077253+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:35.077463+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:36.077686+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 15335424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:37.077892+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 15335424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:38.078077+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 15335424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:39.078243+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 15335424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:40.078404+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 15335424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:41.078562+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 15335424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:42.078756+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 15335424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:43.078988+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 15335424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:44.079116+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 15335424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:45.079266+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 15335424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:46.079423+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 15335424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:47.079627+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 15335424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:48.079792+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 15335424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:49.079930+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 15335424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:50.080103+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 15335424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:51.080254+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 15327232 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:52.080438+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 15327232 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:53.080678+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 15327232 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:54.080823+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 15327232 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:55.080956+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:56.081166+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:57.081332+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:58.081521+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:59.081675+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:00.081936+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:01.082100+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:02.082277+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:03.082435+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:04.082557+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:05.082725+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:06.082868+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:07.083042+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:08.083186+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:09.083347+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:10.083474+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:11.083598+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:12.083719+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:13.083872+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:14.083995+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:15.084149+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:16.084349+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:17.084480+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:18.084590+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:19.084767+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:20.084887+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:21.085085+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:22.085226+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:23.085380+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:24.085554+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:25.085686+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:26.085818+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:27.085991+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:28.086135+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:29.086306+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:30.086484+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:31.086641+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:32.086773+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:33.086953+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:34.087209+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:35.087341+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:36.087484+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:37.087659+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:38.087842+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:39.088059+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:40.089125+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:41.089524+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:42.089666+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:43.090198+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:44.090695+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:45.090974+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:46.091330+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:47.091621+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:48.091947+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:49.092067+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:50.092294+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:51.092509+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:52.092646+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:53.092867+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:54.093085+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:55.093232+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:56.093401+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:57.093550+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:58.093751+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:59.093867+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:00.093975+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:01.094274+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:02.094478+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:03.094623+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:04.094783+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:05.094948+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:06.095142+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:07.095336+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:08.095515+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:09.095659+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:10.095849+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:11.095989+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:12.096148+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:13.096243+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:14.096317+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:15.096451+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:16.096563+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:17.096670+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:18.096811+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:19.096985+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:20.097143+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:21.097293+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:22.097412+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:23.097569+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:24.097703+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:25.097863+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:26.097987+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:27.098125+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:28.098256+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:29.098387+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:30.098514+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:31.098703+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:32.098873+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:33.099120+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:34.099281+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:35.099432+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:36.099573+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:37.099697+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:38.099860+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:39.099985+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:40.100119+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:41.100246+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:42.100359+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:43.100523+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:44.100635+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:45.100747+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:46.100879+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:47.101029+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:48.101159+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:49.101294+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:50.101537+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:51.101697+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:52.101836+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:53.102007+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:54.102147+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:55.102279+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:56.102421+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:57.102571+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:58.102715+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:59.102855+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:00.102987+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:01.103100+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:02.103212+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:03.103363+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:04.103502+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:05.103610+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:06.103733+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:07.103869+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:08.103955+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:09.104068+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:10.104176+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 18:59:43 compute-0 ceph-osd[90655]: do_command 'config diff' '{prefix=config diff}'
Nov 24 18:59:43 compute-0 ceph-osd[90655]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:43 compute-0 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:43 compute-0 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:11.104281+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: do_command 'config show' '{prefix=config show}'
Nov 24 18:59:43 compute-0 ceph-osd[90655]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 24 18:59:43 compute-0 ceph-osd[90655]: do_command 'counter dump' '{prefix=counter dump}'
Nov 24 18:59:43 compute-0 ceph-osd[90655]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 24 18:59:43 compute-0 ceph-osd[90655]: do_command 'counter schema' '{prefix=counter schema}'
Nov 24 18:59:43 compute-0 ceph-osd[90655]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 15007744 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:12.104396+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 14958592 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: tick
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_tickets
Nov 24 18:59:43 compute-0 ceph-osd[90655]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:13.104560+0000)
Nov 24 18:59:43 compute-0 ceph-osd[90655]: do_command 'log dump' '{prefix=log dump}'
Nov 24 18:59:43 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15111 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:43 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 24 18:59:43 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3830832788' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 18:59:43 compute-0 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 18:59:43 compute-0 ceph-mon[74927]: pgmap v1355: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:43 compute-0 ceph-mon[74927]: from='client.15097 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:43 compute-0 ceph-mon[74927]: from='client.15101 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:43 compute-0 ceph-mon[74927]: from='client.15105 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:43 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3075409262' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 18:59:43 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1836565426' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 18:59:43 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3830832788' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 18:59:44 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15115 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:44 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1356: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:44 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 24 18:59:44 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1798673722' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 18:59:44 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15119 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:44 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 24 18:59:44 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2057798993' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 18:59:44 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15123 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:59:44 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 24 18:59:44 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4047101108' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 24 18:59:44 compute-0 ceph-mon[74927]: from='client.15107 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:44 compute-0 ceph-mon[74927]: from='client.15111 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:44 compute-0 ceph-mon[74927]: from='client.15115 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:44 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1798673722' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 18:59:44 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2057798993' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 18:59:45 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15127 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:59:45 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 24 18:59:45 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2701745305' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 24 18:59:45 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15135 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:59:45 compute-0 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:59:45.898+0000 7f6377bb5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 24 18:59:45 compute-0 ceph-mgr[75218]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 24 18:59:45 compute-0 ceph-mon[74927]: pgmap v1356: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:45 compute-0 ceph-mon[74927]: from='client.15119 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:45 compute-0 ceph-mon[74927]: from='client.15123 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:59:45 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/4047101108' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 24 18:59:45 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2701745305' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 24 18:59:46 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 24 18:59:46 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4073264559' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 24 18:59:46 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1357: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:46 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 24 18:59:46 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1201912732' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 24 18:59:46 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 24 18:59:46 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1600982427' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 24 18:59:46 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 24 18:59:46 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1809585107' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 24 18:59:46 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 24 18:59:46 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3146791330' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 24 18:59:46 compute-0 ceph-mon[74927]: from='client.15127 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:59:46 compute-0 ceph-mon[74927]: from='client.15135 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:59:46 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/4073264559' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 24 18:59:46 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1201912732' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 24 18:59:46 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1600982427' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 24 18:59:46 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1809585107' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 24 18:59:46 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3146791330' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 24 18:59:46 compute-0 crontab[299595]: (root) LIST (root)
Nov 24 18:59:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 24 18:59:47 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1319776605' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 24 18:59:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 24 18:59:47 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/46257218' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:23.722638+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 180224 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:24.722745+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 180224 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:25.722887+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 172032 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:26.723025+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 172032 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:27.723140+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 172032 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:28.723272+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 163840 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:29.723450+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 163840 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:30.723601+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 155648 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:31.723776+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 155648 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:32.723950+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 147456 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:33.724095+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 147456 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:34.724245+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 147456 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:35.724411+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 139264 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:36.724516+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 139264 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:37.724616+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 131072 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:38.724787+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 131072 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:39.724963+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 114688 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:40.725079+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 114688 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:41.725190+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 106496 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:42.725378+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 106496 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:43.725533+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 106496 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:44.725688+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 106496 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:45.725862+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 106496 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:46.726014+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 106496 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:47.726183+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 98304 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:48.726743+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 98304 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:49.726885+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 98304 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:50.727050+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 90112 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:51.727204+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 90112 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:52.727329+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 81920 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:53.727439+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 81920 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:54.727623+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 73728 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:55.727779+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 73728 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:56.727920+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 65536 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:57.728056+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 65536 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:58.728194+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 65536 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:59.728279+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 57344 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:00.728436+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 57344 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:01.728562+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 49152 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:02.728686+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 49152 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:03.728954+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 40960 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:04.729207+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 40960 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:05.729355+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 40960 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:06.729541+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 32768 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:07.729686+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 32768 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:08.729779+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 24576 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:09.729914+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 24576 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:10.730023+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 16384 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:11.730140+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 16384 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:12.730271+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 16384 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:13.730428+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 8192 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:14.730619+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 8192 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:15.730771+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 0 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:16.730885+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 0 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:17.731007+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 0 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:18.731214+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1040384 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:19.731329+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1040384 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:20.731451+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 1032192 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:21.731622+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 1032192 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:22.731765+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 1032192 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:23.731886+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 1024000 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:24.732126+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1015808 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:25.732252+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 1007616 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:26.732388+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 1007616 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:27.732508+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 999424 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:28.732637+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 999424 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:29.732772+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 991232 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:30.732890+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 991232 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:31.733128+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 991232 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:32.733308+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 983040 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:33.733428+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 983040 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:34.733566+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 974848 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:35.733717+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 974848 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:36.733827+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 974848 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:37.734052+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 966656 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:38.734232+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 966656 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:39.734399+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 950272 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:40.734614+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 950272 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:41.734813+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 942080 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:42.735011+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 942080 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:43.735177+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 942080 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:44.735390+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 933888 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:45.735576+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 933888 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:46.735717+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 925696 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:47.735853+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 925696 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:48.735980+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 917504 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:49.736109+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 917504 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:50.736257+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 909312 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:51.736381+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 909312 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:52.736490+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 901120 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:53.736612+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 901120 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:54.736787+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 892928 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:55.736985+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 884736 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:56.737098+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 884736 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:57.737230+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 876544 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:58.737340+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 876544 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:59.737469+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 876544 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:00.737604+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 868352 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:01.737709+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 868352 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:02.737817+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 860160 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:03.737942+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 860160 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:04.738411+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 843776 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:05.738971+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 843776 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:06.740135+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 843776 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:07.740365+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 835584 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:08.741849+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 835584 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:09.742551+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 819200 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:10.743479+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 819200 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:11.743840+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 819200 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:12.744057+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 811008 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:13.744179+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 811008 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:14.744330+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 811008 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:15.744529+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 802816 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:16.745120+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 802816 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:17.745246+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 794624 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:18.745360+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 794624 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:19.745505+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 786432 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:20.745735+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 786432 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:21.746131+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 786432 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:22.746382+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 778240 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:23.746522+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 778240 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:24.746708+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 770048 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:25.746867+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 770048 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:26.747120+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 770048 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:27.747286+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 761856 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:28.747482+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 761856 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:29.747627+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 753664 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:30.747830+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 753664 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:31.747952+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 745472 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:32.748125+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 745472 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:33.748270+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 745472 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:34.748430+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 737280 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:35.748597+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 737280 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:36.748792+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 729088 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:37.748997+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 729088 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:38.749154+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 720896 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:39.749309+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 720896 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:40.749481+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 720896 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:41.749689+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 712704 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:42.749871+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 712704 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:43.750114+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 704512 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:44.750436+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 704512 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:45.750652+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 696320 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:46.750964+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 696320 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:47.751156+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 696320 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:48.751309+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 688128 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:49.751474+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 688128 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:50.751651+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 688128 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:51.751857+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 679936 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:52.752109+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 679936 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:53.752355+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 671744 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:54.752628+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 663552 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:55.752848+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 655360 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:56.753038+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 655360 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:57.753238+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 655360 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:58.753482+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 647168 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:59.753688+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 655360 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:00.753996+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 647168 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:01.754241+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 647168 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:02.754455+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 647168 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:03.754630+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 638976 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:04.754821+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 622592 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:05.754993+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 622592 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:06.755163+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 614400 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:07.755424+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 614400 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:08.755595+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 606208 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:09.756041+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 606208 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:10.756458+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 598016 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:11.756794+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 598016 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:12.757022+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 598016 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:13.757329+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 589824 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:14.757642+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 589824 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:15.758005+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 581632 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:16.758262+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 581632 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:17.758534+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 573440 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:18.758821+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 573440 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:19.759006+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 573440 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:20.759157+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 565248 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:21.759372+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 565248 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:22.759527+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 557056 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:23.759695+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 557056 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:24.760011+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 548864 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:25.760174+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 548864 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:26.760309+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 548864 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:27.760479+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 540672 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:28.760602+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 540672 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:29.760744+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 540672 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:30.760869+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 532480 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:31.760977+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 532480 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:32.761183+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 524288 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:33.761326+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 524288 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:34.761483+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 516096 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:35.761652+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 516096 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:36.761815+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 516096 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:37.761965+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 507904 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:38.762074+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 507904 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:39.762194+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 499712 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:40.762312+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 499712 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:41.762444+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 491520 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:42.762671+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 491520 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:43.762922+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 491520 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:44.763123+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 475136 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:45.763301+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 475136 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:46.763425+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 466944 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:47.763589+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 466944 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:48.763746+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 458752 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:49.763858+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 458752 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:50.763988+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 458752 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:51.764151+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 450560 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:52.764289+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 450560 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:53.764427+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 434176 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:54.764585+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 434176 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:55.764703+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 434176 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:56.764871+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 425984 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:57.765039+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 425984 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:58.765180+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 417792 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:59.765310+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 417792 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:00.765426+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 417792 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:01.765706+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 409600 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:02.765850+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 409600 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:03.765980+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 401408 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:04.766120+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 401408 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:05.766302+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 401408 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:06.766453+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 393216 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:07.766569+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 393216 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:08.766687+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 385024 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:09.766837+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Cumulative writes: 6505 writes, 27K keys, 6505 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6505 writes, 1119 syncs, 5.81 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6505 writes, 27K keys, 6505 commit groups, 1.0 writes per commit group, ingest: 19.27 MB, 0.03 MB/s
                                           Interval WAL: 6505 writes, 1119 syncs, 5.81 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.055       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.055       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.055       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.095       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.095       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.095       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 319488 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:10.766950+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 311296 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:11.767102+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 311296 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:12.767244+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 303104 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:13.767412+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 303104 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:14.767647+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 303104 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:15.767885+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 294912 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:16.768142+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 294912 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:17.768357+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 294912 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:18.768576+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 286720 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:19.768743+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 278528 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:20.768957+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 278528 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:21.769118+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 278528 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:22.769279+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 270336 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:23.769458+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 270336 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:24.769647+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 270336 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:25.769787+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 262144 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:26.769916+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 262144 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:27.770025+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 253952 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:28.770129+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 245760 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:29.770263+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 237568 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:30.770384+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 237568 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:31.770518+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 237568 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:32.770632+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 229376 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:33.770744+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:34.770916+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 229376 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:35.771064+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 221184 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:36.771208+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 221184 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:37.771350+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 221184 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:38.771474+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 212992 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:39.771653+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 221184 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:40.771785+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 221184 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:41.771971+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 221184 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:42.772077+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 221184 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:43.772195+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 212992 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:44.772344+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 212992 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:45.772413+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 204800 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:46.772555+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 204800 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:47.772705+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75325440 unmapped: 196608 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:48.772840+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75325440 unmapped: 196608 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:49.772983+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 188416 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:50.773149+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 188416 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:51.773249+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 180224 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:52.773397+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 180224 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:53.773565+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 180224 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:54.773741+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 172032 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:55.773862+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 180224 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:56.774000+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 180224 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:57.774113+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 172032 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:58.774234+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 172032 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:59.774368+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 163840 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:00.774486+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 163840 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:01.774622+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 155648 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:02.774739+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 155648 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:03.774956+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 147456 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:04.775139+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 147456 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:05.775258+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 147456 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:06.775404+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 139264 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:07.775522+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 139264 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:08.775628+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 131072 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:09.775765+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 131072 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:10.775873+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 131072 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:11.775971+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 122880 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:12.776100+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 122880 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:13.776203+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 114688 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:14.776400+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 114688 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:15.776521+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 106496 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:16.776732+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 106496 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:17.776934+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 106496 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:18.777120+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 98304 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:19.777278+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 98304 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:20.777476+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 98304 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:21.777609+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 90112 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:22.777751+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 90112 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:23.777818+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 81920 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:24.778092+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 81920 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:25.778217+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 81920 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:26.778343+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 73728 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:27.778445+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 73728 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:28.778558+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 65536 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:29.778664+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 310.267303467s of 310.278106689s, submitted: 2
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 57344 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:30.778765+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 2023424 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:31.778893+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 2015232 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:32.779075+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 2015232 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:33.779216+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 2015232 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:34.779380+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 2023424 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:35.779532+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 2023424 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:36.779648+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 2023424 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:37.779799+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 2023424 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:38.779936+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 2015232 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:39.780047+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 2015232 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:40.780181+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 2015232 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:41.780336+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 2007040 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:42.780449+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 2007040 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:43.780626+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75620352 unmapped: 1998848 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:44.780792+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 1990656 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:45.780937+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 1990656 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:46.781164+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 1982464 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:47.781321+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 1982464 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:48.781463+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 1974272 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:49.781649+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 1974272 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:50.781808+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 1974272 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:51.781937+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75653120 unmapped: 1966080 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:52.782045+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75653120 unmapped: 1966080 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:53.782192+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 1957888 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:54.782336+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 1957888 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:55.782571+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 1949696 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:56.782841+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 1949696 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:57.782991+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 1949696 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:58.783118+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 1941504 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:59.783269+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 1941504 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:00.783410+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 1933312 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:01.783556+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 1933312 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:02.783756+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 1933312 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:03.783883+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 1925120 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:04.784038+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 1908736 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:05.784164+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 1900544 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:06.784277+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 1900544 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:07.784418+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 1900544 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:08.784571+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 1892352 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:09.784709+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 1892352 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:10.784861+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:11.785040+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:12.785164+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:13.785309+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:14.785464+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:15.785613+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:16.785750+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:17.785876+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:18.785966+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:19.786086+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:20.786332+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:21.786565+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:22.786743+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:23.786884+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:24.787252+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:25.787451+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:26.787631+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:27.787759+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:28.787886+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:29.788005+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 1875968 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:30.788139+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 1875968 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:31.788270+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 1875968 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:32.788404+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 1875968 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:33.788588+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 1875968 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:34.788755+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 1875968 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:35.788891+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:36.789127+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:37.789276+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:38.789396+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:39.789508+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:40.789636+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:41.789757+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:42.789944+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:43.790063+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:44.790208+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:45.790333+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:46.790445+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:47.790644+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:48.790811+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:49.790940+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:50.791071+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:51.791185+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:52.791313+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:53.791439+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:54.791588+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:55.791709+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:56.791841+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:57.791955+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:58.792061+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:59.792180+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:00.792286+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:01.792393+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:02.792515+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:03.792661+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:04.792804+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 1843200 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:05.792967+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 1843200 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:06.793134+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 1843200 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:07.793286+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1835008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:08.793458+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1835008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:09.793580+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1835008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:10.793758+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1835008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:11.793917+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1835008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:12.794005+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1835008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:13.794132+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1835008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:14.794311+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:15.794438+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:16.794562+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:17.794730+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:18.794855+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:19.795004+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:20.795108+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:21.795268+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:22.795390+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:23.795529+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:24.795730+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:25.795965+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:26.796089+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:27.796213+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:28.796331+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:29.796455+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:30.796580+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:31.796813+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:32.796979+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:33.797097+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:34.797242+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:35.797352+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:36.797469+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:37.797603+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:38.797741+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:39.797844+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:40.797956+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:41.798061+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:42.798170+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:43.798294+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:44.798450+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:45.798568+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:46.798696+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:47.798822+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:48.798938+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:49.799063+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:50.800425+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:51.800565+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:52.800694+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:53.800824+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:54.800934+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:55.801086+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:56.801238+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:57.801362+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:58.801550+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:59.801695+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:00.801844+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:01.801992+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:02.802139+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:03.802281+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:04.802436+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:05.802624+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:06.802767+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:07.802892+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:08.803053+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:09.803251+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 1802240 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:10.803393+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 1802240 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:11.803589+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:12.803697+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:13.803834+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:14.804042+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:15.804176+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:16.804292+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:17.804410+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:18.804557+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:19.804680+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:20.804859+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:21.805100+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:22.805331+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:23.805482+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:24.805633+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:25.805750+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:26.805938+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:27.806095+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:28.806230+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:29.806406+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:30.807396+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:31.808050+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:32.808213+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:33.808353+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 1785856 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:34.808528+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 1785856 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:35.808707+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 1785856 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:36.808848+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 1785856 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:37.808940+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 1785856 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:38.809057+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 1785856 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:39.809180+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:40.809291+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:41.809428+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:42.809547+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:43.809612+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:44.809766+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:45.809892+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:46.810049+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:47.810208+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:48.810372+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:49.812490+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1769472 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:50.812608+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1769472 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:51.812775+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1769472 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:52.812913+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1769472 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:53.813067+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:54.813219+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:55.813360+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:56.813488+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:57.813601+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:58.813783+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:59.813936+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:00.814062+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:01.814190+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:02.814524+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:03.814633+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:04.814866+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1753088 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:05.814986+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1753088 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:06.815208+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1753088 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:07.815365+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1753088 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:08.815498+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1753088 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:09.815641+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:10.815768+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:11.815972+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:12.816084+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:13.816244+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:14.816488+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:15.816655+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:16.816777+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:17.816923+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:18.817039+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:19.817145+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 ms_handle_reset con 0x560b41ff1400 session 0x560b413a9860
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b41ff1800
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 ms_handle_reset con 0x560b42032000 session 0x560b41b383c0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42d4cc00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:20.817799+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:21.818604+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:22.818740+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:23.818927+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:24.819083+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:25.819201+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:26.819304+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:27.819450+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:28.819602+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:29.819711+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:30.819837+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:31.819975+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:32.820097+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:33.820298+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:34.820530+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 1728512 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:35.820628+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 1728512 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:36.820775+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 1728512 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:37.820938+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 1728512 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:38.821049+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:39.821247+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:40.821403+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:41.821562+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:42.821711+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:43.821873+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:44.822088+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:45.822239+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:46.822790+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:47.822952+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:48.823178+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:49.823321+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:50.823460+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:51.823608+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:52.823729+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:53.823841+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:54.823974+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:55.824088+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:56.824202+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:57.824320+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:58.824444+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:59.824608+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:00.824771+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:01.824926+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:02.825041+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:03.825145+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:04.825299+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:05.825402+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:06.825504+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:07.825619+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:08.825750+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:09.825953+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:10.826084+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:11.826194+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:12.826331+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:13.826463+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:14.826603+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:15.826724+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:16.826880+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:17.827008+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:18.827126+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:19.827244+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:20.827379+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:21.827520+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:22.827664+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:23.827827+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:24.827960+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:25.828075+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:26.828176+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:27.828288+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:28.828394+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:29.828505+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:30.828602+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:31.828717+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:32.828877+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:33.828979+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:34.829153+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:35.829261+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:36.829388+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:37.829507+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:38.829618+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:39.829726+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:40.829864+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:41.829977+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:42.830081+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:43.830188+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:44.830436+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:45.830546+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:46.830654+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:47.830760+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:48.830888+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:49.830962+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:50.831093+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:51.831202+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:52.831309+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:53.831402+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:54.831559+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:55.831721+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:56.831847+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:57.831982+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:58.832115+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:59.832237+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:00.832856+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:01.832981+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:02.833093+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:03.833240+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:04.833422+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:05.833575+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:06.833764+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:07.833930+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:08.834093+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:09.834229+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:10.834367+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:11.834584+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:12.834684+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:13.834830+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:14.834977+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:15.835084+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:16.835189+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:17.835321+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:18.835429+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:19.835625+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:20.835783+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:21.836010+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:22.836128+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:23.836235+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:24.836391+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:25.836530+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:26.836639+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:27.836750+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:28.836868+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:29.836983+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:30.837118+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:31.837215+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:32.837350+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:33.837497+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:34.837649+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:35.838004+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:36.838111+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:37.838216+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:38.838363+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:39.838521+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:40.838654+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:41.838766+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:42.838873+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:43.838953+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:44.839118+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:45.839379+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:46.839523+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:47.839626+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:48.839801+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:49.839987+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:50.840505+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 1654784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:51.840710+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 1654784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:52.840883+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 1654784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:53.841013+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 1654784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:54.841330+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:55.841520+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:56.841658+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:57.841782+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:58.841944+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:59.842097+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:00.842291+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:01.842440+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:02.842586+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:03.842727+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:04.843090+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:05.872605+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:06.872733+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:07.872841+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:08.872980+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:09.873211+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:10.873365+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:11.873503+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:12.873662+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:13.873790+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:14.874022+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:15.874176+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:16.874299+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:17.874445+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:18.874520+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:19.874652+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:20.874777+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:21.874874+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:22.875010+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:23.875129+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:24.875275+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:25.875427+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:26.875553+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 24 18:59:47 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2328258631' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:27.875676+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:28.875818+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:29.875942+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:30.876062+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:31.876180+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:32.881251+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:33.881391+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:34.881557+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:35.881723+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:36.881843+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:37.881994+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:38.882133+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:39.882252+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:40.882372+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:41.882545+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:42.882732+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:43.882852+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 1622016 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:44.883038+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 1622016 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:45.883173+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 1622016 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:46.883332+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 1622016 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:47.883467+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 1622016 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:48.884084+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 1622016 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:49.884326+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:50.884468+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:51.884821+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:52.885038+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:53.885181+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:54.885647+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:55.885849+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:56.885965+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:57.886102+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:58.886240+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:59.886367+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:00.886626+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:01.886767+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:02.886942+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:03.887086+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:04.887237+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:05.887365+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:06.887534+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:07.887716+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:08.887956+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:09.888107+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:10.888209+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:11.888466+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:12.888609+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:13.888730+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:14.888864+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:15.889042+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:16.889159+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:17.889295+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:18.889466+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:19.889581+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:20.889709+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:21.889946+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:22.890147+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:23.890278+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:24.890427+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:25.890536+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:26.890661+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:27.890788+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:28.890958+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:29.891077+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:30.891193+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:31.891307+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 1605632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:32.891422+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 1605632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:33.891537+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:34.891699+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:35.891926+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:36.892119+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:37.892284+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:38.893111+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:39.893289+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:40.893436+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:41.893561+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:42.894159+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:43.894482+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:44.894721+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:45.894925+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:46.895047+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:47.895181+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:48.895341+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:49.895469+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:50.895580+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:51.895721+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:52.895860+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:53.895964+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:54.896100+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:55.896266+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:56.896395+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:57.896552+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:58.896683+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:59.896784+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:00.896957+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:01.897064+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:02.897195+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:03.897302+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:04.897441+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 1589248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:05.897570+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 1589248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:06.897685+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 1589248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:07.897794+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 1589248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:08.897948+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 1589248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:09.898120+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Cumulative writes: 6685 writes, 27K keys, 6685 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6685 writes, 1209 syncs, 5.53 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 271 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.055       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.055       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.055       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.095       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.095       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.095       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 1556480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:10.898318+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 1556480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:11.898419+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 1556480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:12.899523+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 1556480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:13.899682+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 1556480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:14.899950+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 1556480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:15.900072+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 1556480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:16.900198+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 1548288 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:17.900317+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 1548288 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:18.900604+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:19.900743+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:20.900857+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:21.900939+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:22.901058+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:23.901168+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:24.901301+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:25.901446+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:26.901562+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:27.901686+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:28.901805+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:29.901964+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:30.902088+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:31.902253+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:32.902398+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:33.902491+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:34.902704+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:35.902800+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:36.902884+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:37.903006+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:38.903113+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:39.903234+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:40.903364+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:41.903516+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:42.903619+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:43.903738+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:44.903940+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:45.904063+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:46.904185+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:47.904298+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:48.904445+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:49.904573+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:50.904695+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:51.904819+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:52.904974+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:53.905094+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:54.905359+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:55.905479+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:56.905555+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:57.905680+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:58.905808+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:59.905937+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:00.906091+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:01.906219+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:02.906364+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:03.906539+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:04.906711+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 1507328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:05.906851+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 1507328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:06.906999+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 1507328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:07.907126+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:08.907255+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:09.907358+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:10.907475+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:11.907601+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:12.907731+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:13.907847+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:14.908015+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:15.908213+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:16.908347+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:17.908519+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:18.908627+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:19.908734+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:20.908851+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:21.908971+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:22.909130+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:23.909312+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:24.909474+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:25.909597+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:26.909721+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:27.909857+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:28.958821+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:29.958978+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 599.852172852s of 600.152648926s, submitted: 90
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:30.959100+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 1490944 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:31.959220+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:32.959343+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:33.959500+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:34.959723+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:35.959870+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:36.959983+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:37.960125+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:38.960289+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:39.960416+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:40.960555+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:41.960684+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:42.960839+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:43.960963+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:44.961172+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:45.961359+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:46.961520+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:47.961681+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:48.961804+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:49.961996+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:50.962150+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:51.962266+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:52.962426+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:53.962528+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:54.962717+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:55.962863+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:56.963033+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:57.963189+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:58.963327+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:59.963473+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:00.963735+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:01.963888+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:02.964329+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:03.964966+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:04.965109+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:05.965220+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:06.965383+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:07.965557+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:08.965702+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:09.965820+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:10.965976+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:11.966117+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:12.966240+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:13.966362+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:14.966504+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:15.966626+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:16.966736+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:17.966840+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:18.966955+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:19.967080+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:20.967221+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:21.967391+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:22.967556+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:23.967741+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:24.967947+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:25.968086+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:26.968213+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:27.968346+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:28.968502+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:29.968663+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:30.968808+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:31.968954+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:32.969121+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:33.969272+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:34.969448+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:35.969576+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:36.969728+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:37.969854+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:38.970076+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:39.970235+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:40.970395+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:41.970510+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:42.970671+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:43.970837+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:44.971017+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:45.971114+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:46.971252+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:47.971428+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:48.971573+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:49.971749+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:50.971885+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:51.972084+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:52.972255+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:53.972380+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:54.972591+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:55.972734+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:56.972869+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:57.972980+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:58.973083+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:59.973246+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:00.973421+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:01.973584+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:02.973713+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:03.973857+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:04.974078+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:05.974292+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:06.974701+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:07.975109+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:08.975448+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:09.975717+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:10.975892+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:11.976069+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:12.976307+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:13.976520+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:14.976779+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:15.976991+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:16.977165+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:17.977377+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:18.977559+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:19.977783+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:20.977978+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:21.978120+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:22.978300+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:23.978494+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:24.978667+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:25.978833+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:26.978957+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:27.979108+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:28.979270+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:29.979430+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:30.979600+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:31.979757+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:32.979984+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:33.980117+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:34.980321+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:35.980523+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:36.980747+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:37.980952+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:38.982839+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:39.984272+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:40.985413+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:41.986213+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:42.986755+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:43.987031+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:44.987190+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:45.988359+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:46.989413+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:47.990148+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:48.991034+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:49.991780+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:50.992491+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:51.993195+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:52.993788+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:53.994331+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:54.994873+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 1466368 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:55.995026+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 1466368 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:56.995283+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 1466368 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:57.995421+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 1458176 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:58.995675+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 1458176 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:59.995830+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 1458176 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:00.995976+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 1458176 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:01.996110+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 1458176 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:02.996269+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 1458176 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:03.996508+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 1458176 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:04.996662+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:05.996964+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:06.997235+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:07.997479+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:08.997603+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:09.997772+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:10.997958+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:11.998116+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:12.998519+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:13.998739+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:14.999386+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:15.999780+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:17.000299+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:18.000791+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:19.001009+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:20.001176+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:21.001343+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:22.001475+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:23.001969+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:24.002164+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:25.002579+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:26.002806+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:27.002937+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:28.003233+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:29.003344+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:30.003495+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:31.003704+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:32.003837+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:33.003980+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:34.004098+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:35.004331+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:36.004544+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:37.004723+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:38.004863+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:39.005049+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:40.005190+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:41.005335+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:42.005681+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:43.005832+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:44.005984+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:45.006363+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:46.007112+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:47.007276+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:48.007548+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:49.007713+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:50.007847+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:51.008003+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:52.008247+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:53.008501+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:54.008726+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:55.008993+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:56.009131+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:57.009401+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:58.009627+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:59.009802+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:00.009956+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:01.010159+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:02.010435+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:03.010696+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:04.010982+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:05.011248+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:06.011433+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:07.011590+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:08.011720+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:09.011933+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:10.012061+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:11.012172+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:12.012299+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:13.012428+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:14.012595+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:15.012798+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:16.012960+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:17.013121+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:18.013267+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:19.013470+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:20.013644+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:21.013865+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:22.014026+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:23.014201+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:24.014363+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:25.014498+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:26.014634+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:27.014810+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:28.014953+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:29.015137+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:30.015311+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:31.015429+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:32.015537+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:33.015637+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:34.015806+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:35.015977+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:36.016138+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:37.016291+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:38.016441+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:39.016608+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:40.016861+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:41.017076+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:42.017228+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:43.017407+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:44.017527+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:45.017676+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:46.017859+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:47.018029+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:48.018159+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:49.018334+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:50.018483+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:51.018657+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:52.018799+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:53.018975+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:54.019127+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:55.019305+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:56.019486+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:57.019727+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:58.019936+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:59.020153+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:00.020309+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:01.020490+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:02.020618+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:03.020779+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:04.020922+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:05.021075+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:06.021216+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:07.021379+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:08.021547+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:09.021729+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:10.021868+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:11.022003+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:12.022158+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:13.022380+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:14.022511+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:15.022752+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:16.022928+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:17.023047+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:18.023187+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:19.023326+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:20.023483+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:21.024415+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:22.026589+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:23.026870+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:24.027275+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:25.028584+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:26.029200+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:27.029350+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:28.029765+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:29.030067+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:30.030283+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:31.030424+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:32.030596+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:33.030751+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:34.031187+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:35.031394+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:36.031621+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:37.031768+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:38.032118+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:39.032322+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:40.032648+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:41.032965+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:42.033253+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:43.033495+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:44.033742+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:45.033967+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:46.034174+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:47.034374+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:48.034590+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:49.034758+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:50.034928+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:51.035088+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:52.035243+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:53.035418+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:54.035547+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:55.035730+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:56.035847+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:57.036016+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:58.036187+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:59.036357+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:00.036497+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:01.036615+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:02.036777+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:03.036964+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:04.037158+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:05.037338+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:06.037479+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:07.037598+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:08.037806+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:09.037985+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:10.038191+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:11.038405+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:12.038579+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:13.038740+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:14.039082+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:15.039234+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:16.039390+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:17.039659+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:18.039854+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:19.039989+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:20.040135+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:21.040308+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:22.040425+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:23.040553+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:24.040668+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:25.040845+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:26.041021+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:27.041350+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:28.041489+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:29.042956+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:30.044101+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:31.044983+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:32.045954+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:33.046210+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:34.046379+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:35.046660+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:36.047193+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:37.047319+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:38.047498+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:39.047748+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:40.048000+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:41.048125+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:42.048461+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:43.048782+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43e19800
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:44.048993+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 373.959564209s of 374.288970947s, submitted: 90
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 120 handle_osd_map epochs [121,121], i have 121, src has [1,121]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1351680 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:45.049284+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 1277952 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:46.049534+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 860549 data_alloc: 218103808 data_used: 184320
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca47000/0x0/0x4ffc00000, data 0x11e601/0x1d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 1277952 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:47.049703+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 1277952 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 122 handle_osd_map epochs [122,123], i have 122, src has [1,123]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 122 handle_osd_map epochs [123,123], i have 123, src has [1,123]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 123 ms_handle_reset con 0x560b43e19800 session 0x560b44f00d20
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:48.049958+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1261568 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42033000
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:49.050084+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x12019a/0x1d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 17907712 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 124 ms_handle_reset con 0x560b42033000 session 0x560b44f1d0e0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:50.050276+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:51.050491+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978265 data_alloc: 218103808 data_used: 188416
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:52.050703+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fba3f000/0x0/0x4ffc00000, data 0x1121d56/0x11dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:53.051001+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fba3f000/0x0/0x4ffc00000, data 0x1121d56/0x11dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:54.051146+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:55.051347+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:56.051535+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980391 data_alloc: 218103808 data_used: 188416
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:57.051660+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x11237b9/0x11e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:58.051821+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:59.051985+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:00.052198+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:01.052435+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980391 data_alloc: 218103808 data_used: 188416
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x11237b9/0x11e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:02.052640+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x11237b9/0x11e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:03.052851+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x11237b9/0x11e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:04.053016+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:05.053232+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:06.053377+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980391 data_alloc: 218103808 data_used: 188416
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:07.053587+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x11237b9/0x11e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:08.053774+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:09.053979+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x11237b9/0x11e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:10.054119+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:11.054274+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980391 data_alloc: 218103808 data_used: 188416
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x11237b9/0x11e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:12.054467+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:13.054601+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:14.054761+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x11237b9/0x11e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:15.054969+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:16.055165+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980391 data_alloc: 218103808 data_used: 188416
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:17.055338+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x11237b9/0x11e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:18.055577+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x11237b9/0x11e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:19.055766+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:20.055892+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x11237b9/0x11e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:21.056052+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42033400
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.974910736s of 36.823040009s, submitted: 49
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984850 data_alloc: 218103808 data_used: 188416
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 17891328 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:22.056199+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 125 handle_osd_map epochs [125,126], i have 125, src has [1,126]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 126 ms_handle_reset con 0x560b42033400 session 0x560b44f1dc20
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fba3c000/0x0/0x4ffc00000, data 0x1123fc9/0x11e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 16834560 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fba37000/0x0/0x4ffc00000, data 0x1125b69/0x11e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:23.056345+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b41ff1c00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 16818176 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:24.056524+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 127 ms_handle_reset con 0x560b41ff1c00 session 0x560b44f30f00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 15769600 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:25.056737+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fba37000/0x0/0x4ffc00000, data 0x1126f07/0x11e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 15769600 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:26.056881+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42033000
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994861 data_alloc: 218103808 data_used: 196608
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 15753216 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:27.057102+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 128 ms_handle_reset con 0x560b42033000 session 0x560b44f00d20
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 15728640 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:28.057294+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 15728640 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:29.057441+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42d4dc00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 15720448 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:30.057561+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43e19800
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 87212032 unmapped: 15589376 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 128 heartbeat osd_stat(store_statfs(0x4faa31000/0x0/0x4ffc00000, data 0x2129478/0x21ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:31.057785+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.644784927s of 10.040460587s, submitted: 85
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 129 ms_handle_reset con 0x560b43e19800 session 0x560b44e890e0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223427 data_alloc: 218103808 data_used: 208896
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43cae000
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 23822336 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:32.057977+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43cb7000
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 130 ms_handle_reset con 0x560b42d4dc00 session 0x560b4295e3c0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 130 ms_handle_reset con 0x560b43cae000 session 0x560b44e892c0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b41ff1c00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 23724032 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 131 ms_handle_reset con 0x560b43cb7000 session 0x560b44e890e0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 131 ms_handle_reset con 0x560b41ff1c00 session 0x560b439f4780
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:33.058142+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42033000
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42d4dc00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43e19800
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 131 ms_handle_reset con 0x560b43e19800 session 0x560b44f1dc20
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43e2b000
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 131 ms_handle_reset con 0x560b43e2b000 session 0x560b41f812c0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 22773760 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:34.058257+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 132 ms_handle_reset con 0x560b42d4dc00 session 0x560b44d8c000
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42d4dc00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 132 ms_handle_reset con 0x560b42033000 session 0x560b4221d4a0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b41ff1c00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 22708224 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:35.058502+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43cb7000
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb21d000/0x0/0x4ffc00000, data 0x1131362/0x11fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 133 ms_handle_reset con 0x560b41ff1c00 session 0x560b439fc3c0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 133 ms_handle_reset con 0x560b42d4dc00 session 0x560b44d8d680
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 133 ms_handle_reset con 0x560b43cb7000 session 0x560b44f1c960
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43e19800
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 133 ms_handle_reset con 0x560b43e19800 session 0x560b44569c20
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b41ff1c00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42033000
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 133 ms_handle_reset con 0x560b42033000 session 0x560b4295e3c0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 22740992 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:36.058645+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 134 ms_handle_reset con 0x560b41ff1c00 session 0x560b41f8e3c0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43cb7000
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041733 data_alloc: 218103808 data_used: 241664
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 22732800 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:37.059161+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 135 ms_handle_reset con 0x560b43cb7000 session 0x560b420e7c20
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 22700032 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43e19800
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:38.059400+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 136 ms_handle_reset con 0x560b43e19800 session 0x560b4221cb40
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fba12000/0x0/0x4ffc00000, data 0x1137add/0x1208000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 22618112 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:39.059633+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43e2b000
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43ae1c00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 22609920 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:40.059816+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 137 ms_handle_reset con 0x560b43e2b000 session 0x560b44ce52c0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 137 ms_handle_reset con 0x560b43ae1c00 session 0x560b44568960
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 80224256 unmapped: 22577152 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:41.060012+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b41ff1c00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.287572861s of 10.136335373s, submitted: 244
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051198 data_alloc: 218103808 data_used: 249856
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 80224256 unmapped: 22577152 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:42.060126+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 138 ms_handle_reset con 0x560b41ff1c00 session 0x560b44d525a0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fba0f000/0x0/0x4ffc00000, data 0x113b2dd/0x120d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 22519808 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42033000
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:43.060267+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 21413888 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:44.060381+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 140 ms_handle_reset con 0x560b42033000 session 0x560b446f10e0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43cb7000
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 21372928 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fba08000/0x0/0x4ffc00000, data 0x113e1d0/0x1212000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:45.060549+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 141 ms_handle_reset con 0x560b43cb7000 session 0x560b44d8c3c0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43e19800
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 141 ms_handle_reset con 0x560b43e19800 session 0x560b4214bc20
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b41ff1c00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81510400 unmapped: 21291008 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42033000
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43ae1c00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:46.060701+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 142 ms_handle_reset con 0x560b43ae1c00 session 0x560b4463ef00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063961 data_alloc: 218103808 data_used: 266240
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43cb7000
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 21241856 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:47.061176+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fba04000/0x0/0x4ffc00000, data 0x1141b0a/0x1217000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 143 ms_handle_reset con 0x560b42033000 session 0x560b44d8c000
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 143 ms_handle_reset con 0x560b41ff1c00 session 0x560b44f31c20
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43e21800
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 143 ms_handle_reset con 0x560b43e21800 session 0x560b44f314a0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 143 ms_handle_reset con 0x560b43cb7000 session 0x560b4463f4a0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 21209088 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:48.061395+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 21159936 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:49.061559+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 21159936 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:50.061760+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 21159936 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:51.062007+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071603 data_alloc: 218103808 data_used: 274432
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 21143552 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:52.062263+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 21143552 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:53.062396+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb5f0000/0x0/0x4ffc00000, data 0x1144a42/0x121d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.284852028s of 11.915773392s, submitted: 186
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 21143552 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:54.062534+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 21143552 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:55.062787+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b41ff1c00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 145 ms_handle_reset con 0x560b41ff1c00 session 0x560b44d8c5a0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42033000
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 145 ms_handle_reset con 0x560b42033000 session 0x560b44ce4960
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 21143552 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:56.063016+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43ae1c00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 145 ms_handle_reset con 0x560b43ae1c00 session 0x560b447c4780
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43e21800
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074577 data_alloc: 218103808 data_used: 274432
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fb5ed000/0x0/0x4ffc00000, data 0x114653d/0x1220000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 21135360 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:57.063230+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fb5ed000/0x0/0x4ffc00000, data 0x114653d/0x1220000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81690624 unmapped: 21110784 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:58.063442+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 147 ms_handle_reset con 0x560b43e21800 session 0x560b421bcf00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 21094400 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:59.063577+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b428f6000
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43cafc00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81715200 unmapped: 21086208 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:00.063764+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb5e6000/0x0/0x4ffc00000, data 0x1149c8b/0x1226000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 21069824 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:01.064008+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42e88400
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082986 data_alloc: 218103808 data_used: 278528
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 21069824 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:02.064164+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42e88800
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 148 ms_handle_reset con 0x560b42e88800 session 0x560b44f1d0e0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 148 ms_handle_reset con 0x560b42e88400 session 0x560b4463fa40
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 21053440 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 24 18:59:47 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4250234144' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:03.064397+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b41ff1c00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.870351791s of 10.022338867s, submitted: 58
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 149 ms_handle_reset con 0x560b41ff1c00 session 0x560b4463fe00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42033000
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 21028864 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 149 ms_handle_reset con 0x560b42033000 session 0x560b44df10e0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:04.064521+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43ae1c00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 149 ms_handle_reset con 0x560b43ae1c00 session 0x560b44ef4000
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fb5e0000/0x0/0x4ffc00000, data 0x114d3f7/0x122d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 149 handle_osd_map epochs [150,150], i have 150, src has [1,150]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 21061632 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:05.064693+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43e21800
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb5e0000/0x0/0x4ffc00000, data 0x114d3f7/0x122d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 21045248 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:06.064821+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 151 ms_handle_reset con 0x560b43e21800 session 0x560b44ef5860
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095676 data_alloc: 218103808 data_used: 286720
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b41ff1c00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 19996672 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5dd000/0x0/0x4ffc00000, data 0x1150b87/0x1231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:07.065359+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 152 ms_handle_reset con 0x560b41ff1c00 session 0x560b44f1ed20
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 19980288 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:08.066126+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fb5d8000/0x0/0x4ffc00000, data 0x1152b3c/0x1235000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 19980288 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:09.066580+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 19963904 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:10.066724+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 19963904 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fb5d8000/0x0/0x4ffc00000, data 0x1152b3c/0x1235000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:11.067015+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103151 data_alloc: 218103808 data_used: 294912
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 19963904 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:12.067166+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42033000
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 152 ms_handle_reset con 0x560b42033000 session 0x560b44df0b40
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42e88400
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 152 ms_handle_reset con 0x560b42e88400 session 0x560b4295e3c0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43ae1c00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 152 ms_handle_reset con 0x560b43ae1c00 session 0x560b44d8cd20
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b406a1c00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 152 ms_handle_reset con 0x560b406a1c00 session 0x560b44d8cf00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b41ff1c00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42033000
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 153 ms_handle_reset con 0x560b42033000 session 0x560b41f834a0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 153 ms_handle_reset con 0x560b41ff1c00 session 0x560b44f31e00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42e88400
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 19939328 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 153 ms_handle_reset con 0x560b42e88400 session 0x560b44df10e0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43ae1c00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:13.067298+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 153 ms_handle_reset con 0x560b43ae1c00 session 0x560b44ef5860
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42d4dc00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 153 ms_handle_reset con 0x560b42d4dc00 session 0x560b4545c000
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b41ff1c00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 153 ms_handle_reset con 0x560b41ff1c00 session 0x560b4545c780
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fb5d8000/0x0/0x4ffc00000, data 0x1152b3c/0x1235000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 20160512 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:14.067548+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42033000
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.832967758s of 11.620203972s, submitted: 124
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 19972096 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11545d7/0x1238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:15.067715+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 19972096 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:16.067966+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107738 data_alloc: 218103808 data_used: 299008
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 19972096 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:17.068141+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42e88400
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 153 ms_handle_reset con 0x560b42e88400 session 0x560b44f1c3c0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43ae1c00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 154 ms_handle_reset con 0x560b43ae1c00 session 0x560b447c4f00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43cba800
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43b03400
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 19963904 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:18.068252+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 154 ms_handle_reset con 0x560b43cba800 session 0x560b44755e00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 154 ms_handle_reset con 0x560b43b03400 session 0x560b421bcf00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43b03400
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 154 ms_handle_reset con 0x560b43b03400 session 0x560b44f30f00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b41ff1c00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fb5d6000/0x0/0x4ffc00000, data 0x11545d7/0x1238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 155 ms_handle_reset con 0x560b41ff1c00 session 0x560b44df03c0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43ae1400
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 19931136 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:19.068370+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 156 ms_handle_reset con 0x560b43ae1400 session 0x560b42803c20
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 19898368 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:20.068480+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43ae1c00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 156 ms_handle_reset con 0x560b43ae1c00 session 0x560b44d8de00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43cba800
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 156 ms_handle_reset con 0x560b43cba800 session 0x560b44d8c780
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 19906560 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b41ff1c00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:21.068576+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 156 ms_handle_reset con 0x560b41ff1c00 session 0x560b42cebe00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb5ca000/0x0/0x4ffc00000, data 0x11598ce/0x1242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120481 data_alloc: 218103808 data_used: 315392
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 19906560 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:22.068725+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43ae1400
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 156 ms_handle_reset con 0x560b43ae1400 session 0x560b44ef4960
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 19881984 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:23.068852+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43ae1c00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb5cc000/0x0/0x4ffc00000, data 0x11598ce/0x1242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [0,0,0,0,1])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 18833408 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:24.068993+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 157 ms_handle_reset con 0x560b43ae1c00 session 0x560b44f1c000
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:25.069188+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82935808 unmapped: 19865600 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.018726349s of 10.783482552s, submitted: 110
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 157 ms_handle_reset con 0x560b42033000 session 0x560b4545d2c0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43b03400
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:26.069299+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 19849216 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125112 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _renew_subs
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:27.069417+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82960384 unmapped: 19841024 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 158 ms_handle_reset con 0x560b43b03400 session 0x560b4545de00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 158 handle_osd_map epochs [158,159], i have 158, src has [1,159]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:28.069550+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82968576 unmapped: 19832832 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 159 heartbeat osd_stat(store_statfs(0x4fb5c7000/0x0/0x4ffc00000, data 0x115d098/0x1247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:29.069685+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82984960 unmapped: 19816448 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:30.069968+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82984960 unmapped: 19816448 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 159 ms_handle_reset con 0x560b428f6000 session 0x560b44d8d2c0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 159 ms_handle_reset con 0x560b43cafc00 session 0x560b4221d4a0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:31.070121+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82984960 unmapped: 19816448 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b41ff1c00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 159 ms_handle_reset con 0x560b41ff1c00 session 0x560b41f82d20
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128896 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 159 heartbeat osd_stat(store_statfs(0x4fb5c4000/0x0/0x4ffc00000, data 0x115eb17/0x124a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:32.070341+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 19791872 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b42033000
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 159 ms_handle_reset con 0x560b42033000 session 0x560b445cc960
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b43ae1400
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 159 ms_handle_reset con 0x560b43ae1400 session 0x560b4281fc20
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:33.070487+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 19791872 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:34.070677+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 19775488 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5c2000/0x0/0x4ffc00000, data 0x11606ed/0x124b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:35.070976+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 19775488 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5c2000/0x0/0x4ffc00000, data 0x11606ed/0x124b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:36.071192+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 19775488 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1131316 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:37.071366+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 19775488 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 160 handle_osd_map epochs [160,161], i have 160, src has [1,161]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.780132294s of 12.234910965s, submitted: 104
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:38.071536+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:39.071724+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:40.071865+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:41.072066+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:42.072172+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:43.072325+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:44.072495+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:45.072679+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:46.072863+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:47.072985+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:48.073171+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:49.073330+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:50.073491+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:51.073617+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:52.073775+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:53.073940+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:54.074097+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:55.074306+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:56.074459+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:57.074625+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:58.074783+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:59.074963+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:00.075187+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:01.075353+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:02.075541+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:03.075697+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:04.075876+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:05.076083+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:06.076216+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:07.076363+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:08.076500+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:09.076655+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:10.076884+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1801.0 total, 600.0 interval
                                           Cumulative writes: 8591 writes, 32K keys, 8591 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8591 writes, 2012 syncs, 4.27 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1906 writes, 4972 keys, 1906 commit groups, 1.0 writes per commit group, ingest: 2.41 MB, 0.00 MB/s
                                           Interval WAL: 1906 writes, 803 syncs, 2.37 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:11.077023+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:12.077165+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:13.077300+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:14.077462+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:15.077659+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:16.077953+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:17.078063+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:18.078202+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:19.078317+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: mgrc ms_handle_reset ms_handle_reset con 0x560b41415c00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/536471675
Nov 24 18:59:47 compute-0 ceph-osd[89581]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/536471675,v1:192.168.122.100:6801/536471675]
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: get_auth_request con 0x560b43ae1400 auth_method 0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: mgrc handle_mgr_configure stats_period=5
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 ms_handle_reset con 0x560b41ff0800 session 0x560b413a9680
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b452a2000
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:20.078496+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 ms_handle_reset con 0x560b41ff1800 session 0x560b44d52f00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b452a2400
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 ms_handle_reset con 0x560b42d4cc00 session 0x560b4278c000
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b452a2800
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:21.078796+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:22.078953+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:23.079094+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:24.079215+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:25.079367+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:26.079564+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:27.079721+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:28.079964+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:29.080165+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:30.080363+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:31.080508+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:32.080664+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:33.080846+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:34.081057+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:35.081332+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:36.081523+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:37.081663+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:38.082129+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:39.082313+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:40.082502+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:41.082645+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:42.082831+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:43.082955+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:44.083147+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:45.083369+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:46.083513+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:47.083655+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:48.083794+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:49.084076+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:50.084261+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:51.084393+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:52.084557+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:53.084676+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:54.084800+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:55.085213+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:56.085386+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:57.085529+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:58.085681+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:59.085855+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:00.086011+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:01.086198+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:02.086333+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:03.086465+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:04.086649+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:05.086848+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:06.086994+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:07.087125+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:08.087271+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:09.087406+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:10.087569+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:11.087689+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:12.087856+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:13.088038+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:14.088150+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:15.088304+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:16.088914+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:17.089039+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:18.089182+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:19.089363+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:20.089536+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:21.089722+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 19456000 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: do_command 'config diff' '{prefix=config diff}'
Nov 24 18:59:47 compute-0 ceph-osd[89581]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 24 18:59:47 compute-0 ceph-osd[89581]: do_command 'config show' '{prefix=config show}'
Nov 24 18:59:47 compute-0 ceph-osd[89581]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 24 18:59:47 compute-0 ceph-osd[89581]: do_command 'counter dump' '{prefix=counter dump}'
Nov 24 18:59:47 compute-0 ceph-osd[89581]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 24 18:59:47 compute-0 ceph-osd[89581]: do_command 'counter schema' '{prefix=counter schema}'
Nov 24 18:59:47 compute-0 ceph-osd[89581]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:22.089954+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 ms_handle_reset con 0x560b42d4c400 session 0x560b43a1b860
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: handle_auth_request added challenge on 0x560b452a2c00
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 18923520 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:23.090119+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 18767872 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:24.090596+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: do_command 'log dump' '{prefix=log dump}'
Nov 24 18:59:47 compute-0 ceph-osd[89581]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 18759680 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: do_command 'perf dump' '{prefix=perf dump}'
Nov 24 18:59:47 compute-0 ceph-osd[89581]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 24 18:59:47 compute-0 ceph-osd[89581]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 24 18:59:47 compute-0 ceph-osd[89581]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: do_command 'perf schema' '{prefix=perf schema}'
Nov 24 18:59:47 compute-0 ceph-osd[89581]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:25.090861+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 18538496 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:26.091029+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 18538496 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:27.091190+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 18538496 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:28.091356+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 18538496 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:29.091704+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 18538496 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:30.091911+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 112.251235962s of 112.263244629s, submitted: 53
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 18522112 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:31.092097+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18415616 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [0,0,1])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:32.092313+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:33.092521+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:34.092699+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:35.092968+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:36.093123+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:37.093296+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:38.093466+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:39.093595+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:40.093760+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:41.093932+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:42.094049+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:43.094186+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:44.094323+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:45.094522+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:46.094658+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:47.094839+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:48.094958+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:49.095103+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:50.095249+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:51.095431+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:52.095687+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:53.095923+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:54.096082+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:55.096553+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:56.096692+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:57.096928+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:58.097081+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:59.097335+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:00.097497+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:01.097720+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:02.097925+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:03.098119+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:04.098299+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:05.098471+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:06.098604+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:07.098732+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:08.098871+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:09.098953+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:10.099089+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:11.099305+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:12.099457+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:13.099613+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:14.099805+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:15.099972+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:16.100142+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:17.100305+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:18.100463+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:19.100602+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:20.100733+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:21.100882+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:22.101055+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:23.101206+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:24.101357+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:25.101535+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:26.101653+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:27.101819+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:28.101977+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:29.102167+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:30.102355+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:31.102503+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:32.102629+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:33.102775+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:34.102953+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:35.103116+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:36.103235+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:37.103391+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:38.103514+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:39.103718+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:40.103863+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:41.104001+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:42.104136+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:43.104293+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:44.104463+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:45.104647+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:46.104787+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:47.104989+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:48.105127+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:49.105251+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:50.105379+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:51.105571+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:52.105754+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:53.105993+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:54.106191+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:55.106378+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:56.106501+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:57.106641+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:58.106784+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:59.106956+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:00.107091+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:01.107248+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:02.107394+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:03.107534+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:04.107695+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:05.108055+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:06.108201+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:07.108332+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:08.108455+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:09.108594+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:10.108711+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:11.108970+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:12.109114+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:13.109280+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:14.109462+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:15.109668+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:16.109837+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:17.109996+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:18.110163+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:19.110312+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:20.110448+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:21.110605+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:22.110777+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:23.110948+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:24.111091+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:25.111279+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:26.111386+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:27.111534+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:28.111625+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:29.111719+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:30.111869+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:31.111985+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:32.112105+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:33.112202+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:34.112389+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:35.112545+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:36.112667+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:37.112811+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:38.112963+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:39.113098+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:40.113245+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:41.113349+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:42.113474+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:43.113612+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:44.113736+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:45.113873+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:46.114038+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:47.114157+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:48.114271+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:49.114407+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:50.114539+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:51.114668+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:52.114807+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:53.114958+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:54.115124+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:55.115287+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:56.115447+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:57.115583+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:58.115684+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:59.116164+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:00.116319+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:01.116454+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:02.116683+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:03.116894+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:04.117107+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:05.117308+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:06.117501+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:07.117644+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:08.117807+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:09.117978+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:10.118130+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:11.118290+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:12.118446+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:13.118598+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:14.118775+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:15.118975+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:16.119139+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:17.119316+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:18.119481+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:19.119639+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:20.119823+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:21.119994+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:22.120130+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:23.120319+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:24.120460+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:25.120663+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:26.120827+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:27.121004+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:28.121124+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:29.121302+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:30.121447+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:31.121576+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:32.121700+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:33.121854+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:34.122003+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:35.122191+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:36.122344+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:37.122553+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:38.122708+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:39.122888+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:40.123088+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:41.123211+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:42.123363+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:43.123561+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:44.123718+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:45.123849+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:46.124003+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:47.124290+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:48.124462+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:49.124594+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:50.124753+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:51.125942+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:52.126087+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:53.126224+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:54.126368+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:55.126597+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:56.126744+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:57.126981+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:58.127128+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:59.128347+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:00.129783+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:01.131127+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:02.131863+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:03.132368+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:04.133365+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:05.133697+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:06.133843+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:07.134234+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:08.134807+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:09.135177+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:10.135444+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:11.135595+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:12.136605+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:13.137061+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:14.137355+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:15.137692+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:16.138024+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:17.138207+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:18.138548+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:19.138720+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:20.138977+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:21.139161+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:22.139294+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 18382848 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:23.139613+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:24.139749+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:25.139942+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:26.140233+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:27.140429+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:28.140622+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:29.140843+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:30.141029+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:31.141159+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:32.141429+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:33.141562+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:34.141730+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:35.141993+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:36.142174+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:37.142459+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:38.142686+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:39.142931+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:40.143177+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:41.143380+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:42.143721+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:43.143872+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:44.144080+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:45.144306+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:46.144558+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:47.144695+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:48.144872+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:49.145042+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:50.145285+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:51.145574+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:52.145763+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:53.145961+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:54.146162+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:55.146389+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:56.146601+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:57.146807+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:58.146979+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:59.147124+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:00.147399+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:01.147550+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:02.147697+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:03.147959+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:04.148111+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 18374656 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:05.148300+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:06.148448+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:07.148607+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:08.148761+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:09.149008+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:10.149155+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:11.149272+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:12.149424+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:13.149581+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:14.149739+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:15.149945+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:16.150065+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:17.150316+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:18.150512+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:19.150695+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:20.150841+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:21.150967+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:22.151152+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:23.151313+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:24.151476+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:25.151684+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:26.151877+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:27.152039+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:28.152175+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:29.152323+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:30.152481+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:31.152658+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:32.152824+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:33.152992+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:34.153125+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:35.153374+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:36.153550+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:37.153745+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:38.153950+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:39.154089+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:40.154243+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:41.154422+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:42.154595+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:43.154712+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:44.154856+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:45.155081+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:46.155294+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:47.155439+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:48.155578+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:49.155845+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:50.155992+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:51.156195+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:52.156340+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:53.156548+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:54.156689+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:55.156967+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:56.157101+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:57.157292+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:58.157415+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:59.157567+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:00.157715+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:01.157842+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:02.157970+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:03.158500+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:04.158628+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:05.158838+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:06.159032+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:07.159305+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:08.159476+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:09.159645+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:10.159804+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:11.159940+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:12.160051+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:13.160138+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:14.160221+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:15.160393+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:16.160527+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:17.160670+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:18.160729+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:19.160857+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:20.160996+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:21.161107+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:22.161221+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:23.161342+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:24.161449+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:25.161608+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:26.161748+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:27.161848+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:28.161979+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:29.162092+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:30.235238+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:31.235372+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:32.235519+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:33.235645+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:34.235799+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:35.236013+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:36.236130+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:37.236267+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:38.236463+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:39.236607+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:40.237064+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:41.237174+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:42.237298+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:43.237577+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:44.238032+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:45.238219+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:46.239084+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:47.239416+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:48.239980+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:49.240155+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:50.240450+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:51.240629+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:52.241214+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:53.241424+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:54.241691+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:55.242081+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:56.242389+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:57.242669+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:58.242816+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 18350080 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:59.243005+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 18350080 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:00.243160+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 18350080 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:01.243298+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 18350080 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:02.243437+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 18350080 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:03.243605+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 18350080 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:04.243859+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 18350080 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:05.244175+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 18350080 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:06.244320+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 18350080 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:07.244517+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 18350080 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:08.244666+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 18350080 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:09.244858+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 18350080 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:10.245086+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 18350080 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:11.245268+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 18350080 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:12.245394+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 18350080 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:13.245507+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 18350080 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:14.245608+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 18350080 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:15.245742+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 18350080 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:16.245930+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 18350080 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:17.246111+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 18350080 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:18.246255+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 18350080 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:19.246460+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 18350080 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:20.246578+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 18350080 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:21.246725+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 18350080 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:22.246885+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 18350080 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:23.247083+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 18350080 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:24.247209+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 18350080 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:25.247389+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:26.247514+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:27.247619+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:28.247710+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:29.247843+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:30.248004+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:31.248177+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:32.248344+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:33.248530+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:34.248693+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:35.248845+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:36.249000+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:37.249141+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:38.249327+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:39.249529+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:40.249643+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:41.249772+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:42.250006+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:43.250125+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:44.250263+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:45.250467+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:46.250591+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:47.250729+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:48.250857+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:49.251041+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:50.251170+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:51.251324+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:52.251481+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:53.251644+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:54.251796+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:55.251982+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:56.252232+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:57.252396+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:58.252545+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:59.252733+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:00.252877+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:01.253049+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:02.253197+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 18366464 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:03.253327+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:04.253501+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:05.253669+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:06.253791+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:07.253947+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:08.254092+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:09.254218+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:10.254360+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:11.254561+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:12.254700+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:13.254817+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:47 compute-0 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:47 compute-0 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:14.254947+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 18358272 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: do_command 'config diff' '{prefix=config diff}'
Nov 24 18:59:47 compute-0 ceph-osd[89581]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 24 18:59:47 compute-0 ceph-osd[89581]: do_command 'config show' '{prefix=config show}'
Nov 24 18:59:47 compute-0 ceph-osd[89581]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:15.255094+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: do_command 'counter dump' '{prefix=counter dump}'
Nov 24 18:59:47 compute-0 ceph-osd[89581]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 24 18:59:47 compute-0 ceph-osd[89581]: do_command 'counter schema' '{prefix=counter schema}'
Nov 24 18:59:47 compute-0 ceph-osd[89581]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 18210816 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:16.255210+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 18350080 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:47 compute-0 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: tick
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_tickets
Nov 24 18:59:47 compute-0 ceph-osd[89581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:17.255364+0000)
Nov 24 18:59:47 compute-0 ceph-osd[89581]: do_command 'log dump' '{prefix=log dump}'
Nov 24 18:59:47 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 24 18:59:47 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/35131271' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 24 18:59:47 compute-0 ceph-mon[74927]: pgmap v1357: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:47 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1319776605' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 24 18:59:47 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/46257218' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 24 18:59:47 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2328258631' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 24 18:59:47 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/4250234144' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 24 18:59:47 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/35131271' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 24 18:59:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 24 18:59:48 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3311424348' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 24 18:59:48 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1358: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:59:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 24 18:59:48 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2523414025' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 18:59:48 compute-0 rsyslogd[1008]: imjournal from <np0005533938:ceph-osd>: begin to drop messages due to rate-limiting
Nov 24 18:59:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 24 18:59:48 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/640942757' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 24 18:59:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 24 18:59:48 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4071233241' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 24 18:59:48 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 24 18:59:48 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2601501531' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 24 18:59:48 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3311424348' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 24 18:59:48 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2523414025' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 18:59:48 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/640942757' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 24 18:59:48 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/4071233241' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 24 18:59:48 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2601501531' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 24 18:59:49 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15167 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:49 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15169 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:59:49 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15171 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:49 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15173 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:59:49 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15177 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:59:49 compute-0 ceph-mon[74927]: pgmap v1358: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:50 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1359: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 24 18:59:50 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1448331632' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 24 18:59:50 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15181 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:59:50 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15185 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:59:50 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 24 18:59:50 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2898377880' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 24 18:59:50 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15187 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:59:50 compute-0 ceph-mon[74927]: from='client.15167 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:50 compute-0 ceph-mon[74927]: from='client.15169 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:59:50 compute-0 ceph-mon[74927]: from='client.15171 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:50 compute-0 ceph-mon[74927]: from='client.15173 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:59:50 compute-0 ceph-mon[74927]: from='client.15177 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:59:50 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1448331632' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 24 18:59:50 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2898377880' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 24 18:59:51 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 24 18:59:51 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1312726327' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 18:59:51 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15191 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:59:51 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 24 18:59:51 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2356360020' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 24 18:59:51 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 18:59:51 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:18.057236+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 876544 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:19.057445+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 876544 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:20.057592+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 868352 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:21.057745+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 868352 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:22.057958+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 868352 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:23.058115+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 860160 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:24.058327+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 860160 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:25.058479+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 851968 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:26.058671+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 851968 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:27.058790+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 843776 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:28.058957+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 843776 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:29.059083+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 843776 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:30.059226+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 835584 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:31.059442+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 835584 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:32.059619+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 819200 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:33.059779+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 819200 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:34.059970+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 819200 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:35.060125+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66240512 unmapped: 811008 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:36.060266+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66240512 unmapped: 811008 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:37.060411+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 802816 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:38.060608+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 802816 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:39.061208+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 794624 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:40.061333+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 786432 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:41.061469+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 786432 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:42.061642+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 778240 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:43.061788+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 778240 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:44.061969+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66281472 unmapped: 770048 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:45.062144+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66281472 unmapped: 770048 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:46.062284+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:47.062430+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:48.062601+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:49.062786+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:50.062953+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:51.063080+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:52.063229+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:53.063407+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:54.063610+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:55.063749+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:56.063926+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:57.064051+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 745472 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:58.064233+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 745472 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:26:59.064370+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 745472 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:00.064518+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 737280 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:01.064650+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 737280 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:02.064812+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 729088 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:03.064919+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 729088 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:04.065062+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 729088 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:05.065272+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 720896 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:06.065404+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 720896 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:07.065582+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 712704 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:08.065779+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 712704 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:09.065930+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 704512 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:10.066064+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 704512 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:11.066190+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 704512 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:12.066326+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 696320 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:13.066484+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 696320 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:14.066620+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 696320 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:15.066763+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 696320 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:16.066880+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 688128 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:17.067074+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 688128 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:18.067228+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 679936 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:19.067356+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 679936 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:20.067475+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 688128 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:21.067611+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 679936 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:22.067750+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 679936 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:23.067889+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 679936 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:24.068100+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 679936 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:25.068252+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 671744 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:26.068406+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 663552 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:27.068614+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 663552 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:28.068842+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 663552 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:29.069033+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 647168 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:30.069216+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 647168 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:31.069368+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 647168 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:32.069555+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 638976 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:33.069722+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 638976 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:34.069864+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 630784 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:35.069986+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 622592 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:36.070204+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 622592 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:37.070393+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 614400 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:38.070565+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 614400 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:39.070816+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 606208 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:40.071016+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 606208 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:41.071153+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 606208 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:42.071315+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 598016 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:43.071454+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 598016 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:44.071568+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 589824 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:45.071770+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 581632 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:46.071990+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 581632 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:47.072132+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 581632 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:48.072323+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 573440 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:49.072479+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 573440 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:50.072622+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 573440 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:51.072774+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 565248 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:52.072982+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 565248 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:53.073145+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 557056 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:54.073278+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 557056 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:55.073420+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 557056 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:56.073594+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 548864 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:57.073742+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 548864 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:58.073949+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 548864 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:27:59.074129+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 540672 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:00.074268+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 540672 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:01.074437+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 532480 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:02.074640+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 532480 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:03.074819+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 532480 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:04.074988+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 524288 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:05.075586+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 524288 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:06.076248+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 516096 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:07.076395+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 516096 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:08.076571+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 516096 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:09.076715+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 507904 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:10.077012+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 507904 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:11.077325+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 507904 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:12.077485+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:13.077704+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:14.077970+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:15.078126+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:16.078271+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:17.078397+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:18.078527+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 491520 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:19.078641+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 491520 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:20.078822+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:21.079136+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 491520 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:22.079494+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 491520 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:23.079767+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 483328 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:24.080020+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 483328 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:25.080281+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 483328 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:26.080474+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 475136 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:27.080701+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 475136 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:28.080944+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 466944 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:29.081128+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 466944 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:30.081339+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 458752 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:31.081540+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 450560 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:32.081705+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 450560 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:33.081999+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 442368 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:34.082184+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 434176 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:35.082381+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 442368 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:36.082561+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 442368 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:37.082738+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 434176 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:38.083001+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 434176 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:39.083136+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 425984 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:40.083266+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 425984 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:41.083415+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 425984 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:42.083532+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 417792 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:43.083725+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 417792 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:44.083866+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 409600 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:45.084030+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 409600 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:46.084256+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 409600 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:47.084407+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 401408 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:48.084602+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 401408 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:49.084786+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 401408 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:50.084936+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 393216 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:51.085058+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 393216 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:52.085193+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 393216 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:53.085317+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 385024 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:54.085561+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 385024 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:55.085723+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 385024 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:56.085937+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 376832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:57.086095+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 376832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:58.086258+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 376832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:28:59.086442+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 376832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:00.086645+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 376832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:01.086815+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 368640 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:02.086998+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 368640 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:03.087181+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 360448 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:04.087316+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 360448 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:05.087486+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 376832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:06.087785+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 368640 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:07.087968+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 368640 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:08.088164+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 368640 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:09.090102+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 360448 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:10.091767+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 360448 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:11.093318+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 360448 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:12.093454+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 352256 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:13.093592+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 352256 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:14.094474+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 352256 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:15.094809+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 344064 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:16.095485+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 344064 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:17.096019+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 335872 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:18.096214+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 335872 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:19.096652+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 335872 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:20.096777+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 327680 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:21.096941+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 327680 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:22.097271+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 327680 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:23.097429+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 319488 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:24.097684+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 319488 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:25.097815+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 311296 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:26.098079+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 311296 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:27.098286+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 311296 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:28.098512+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 303104 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:29.098643+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 294912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:30.098811+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 294912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:31.098980+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 286720 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:32.099162+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 286720 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:33.099308+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 286720 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:34.099476+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 278528 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:35.099665+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 270336 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:36.099835+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 262144 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:37.100033+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 262144 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:38.100213+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 262144 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:39.100420+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 253952 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:40.100545+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 245760 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:41.100722+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 237568 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:42.101068+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 237568 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:43.101235+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 237568 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:44.101358+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:45.101508+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 237568 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:46.101749+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 221184 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:47.101920+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 221184 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:48.102128+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 212992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:49.102327+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 212992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:50.102502+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 204800 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:51.102647+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 196608 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:52.102770+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 196608 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:53.102928+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 188416 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:54.103140+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 188416 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:55.103326+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 180224 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:56.103475+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 180224 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:57.103626+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 180224 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:58.103828+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 172032 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:29:59.104640+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 172032 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:00.104817+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 172032 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:01.105031+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 163840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:02.105225+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 163840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:03.105390+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 163840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 5370 writes, 23K keys, 5370 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5370 writes, 751 syncs, 7.15 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5370 writes, 23K keys, 5370 commit groups, 1.0 writes per commit group, ingest: 18.36 MB, 0.03 MB/s
                                           Interval WAL: 5370 writes, 751 syncs, 7.15 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:04.105745+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 90112 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:05.105877+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 81920 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:06.106029+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 73728 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:07.106158+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 73728 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:08.106354+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 65536 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:09.106622+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 65536 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:10.106869+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 65536 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:11.107025+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 65536 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:12.107270+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 57344 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:13.107456+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 57344 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:14.107928+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 49152 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:15.108333+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 49152 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:16.108738+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 49152 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:17.110354+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 40960 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:18.110546+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 40960 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:19.111356+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 32768 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:20.112308+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 32768 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:21.112754+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 32768 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:22.113240+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 24576 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:23.113520+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 24576 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:24.113735+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 16384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:25.114418+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 16384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:26.114730+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 16384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:27.114859+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 8192 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:28.115035+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 8192 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:29.115158+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 8192 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:30.115325+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 0 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:31.115452+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 0 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:32.115733+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 0 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:33.115877+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:34.115970+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:35.116136+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:36.116296+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 1032192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:37.116637+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 1032192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:38.116961+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:39.117147+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:40.117273+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:41.117440+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:42.117597+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:43.117770+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:44.117907+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:45.118178+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:46.118337+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:47.118525+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 1007616 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:48.118685+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 1007616 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:49.118816+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:50.118952+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:51.119123+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:52.119250+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:53.119381+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:54.119492+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:55.119627+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:56.119769+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:57.119927+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 974848 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:58.120113+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 974848 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:30:59.120251+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:00.120394+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:01.120518+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:02.120711+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 958464 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:03.120919+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 958464 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:04.121082+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 958464 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:05.121214+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 950272 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:06.121403+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 942080 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:07.121596+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 942080 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:08.121814+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 933888 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:09.122012+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 933888 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:10.122197+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 925696 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:11.122365+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 925696 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:12.122523+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 925696 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:13.122644+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:14.122759+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:15.122971+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:16.123081+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:17.123256+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:18.123469+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:19.123625+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:20.123769+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:21.124028+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:22.124154+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:23.124292+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:24.124466+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:25.124606+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:26.124761+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 868352 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:27.124948+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 868352 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:28.125114+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 868352 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:29.125273+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 868352 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 332.425140381s of 332.453826904s, submitted: 7
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:30.125410+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 540672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:31.125522+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68878336 unmapped: 270336 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:32.125655+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68878336 unmapped: 270336 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:33.125802+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68878336 unmapped: 270336 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:34.125958+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68902912 unmapped: 245760 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:35.126071+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68927488 unmapped: 221184 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:36.126216+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68935680 unmapped: 212992 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:37.126352+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68935680 unmapped: 212992 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:38.126555+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68935680 unmapped: 212992 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:39.126673+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68943872 unmapped: 204800 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:40.126771+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68943872 unmapped: 204800 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:41.126910+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68952064 unmapped: 196608 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:42.127055+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68952064 unmapped: 196608 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:43.127172+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68960256 unmapped: 188416 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:44.127313+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68960256 unmapped: 188416 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:45.127495+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68960256 unmapped: 188416 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:46.127656+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 180224 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:47.127819+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 180224 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:48.127989+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 172032 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:49.128132+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 172032 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:50.128957+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 155648 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:51.129083+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 155648 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:52.129479+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 155648 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:53.129698+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69001216 unmapped: 147456 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:54.129920+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69001216 unmapped: 147456 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:55.130276+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69001216 unmapped: 147456 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:56.130443+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69009408 unmapped: 139264 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:57.130649+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69009408 unmapped: 139264 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:58.130826+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 131072 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:31:59.130969+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 131072 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:00.131272+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69025792 unmapped: 122880 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:01.131527+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69025792 unmapped: 122880 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:02.131644+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 114688 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:03.131822+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 114688 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:04.131980+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 114688 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:05.132145+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:06.132323+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:07.132440+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:08.132618+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:09.132810+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:10.132963+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:11.133115+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:12.133294+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:13.133443+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:14.133602+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:15.133720+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:16.133861+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:17.133953+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:18.134140+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:19.134294+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 106496 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:20.134444+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 98304 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:21.134641+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 98304 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:22.134761+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 98304 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:23.135000+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 98304 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:24.135162+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 98304 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:25.135316+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 98304 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:26.135511+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 98304 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:27.135690+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 98304 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:28.135843+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 98304 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:29.135961+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 98304 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:30.136103+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 90112 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:31.136255+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 90112 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:32.136405+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 90112 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:33.136604+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 90112 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:34.136750+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 81920 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:35.136988+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 73728 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:36.137144+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 73728 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:37.137378+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 73728 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:38.137594+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 73728 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:39.137775+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 73728 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:40.137982+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 73728 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:41.138174+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 73728 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:42.138307+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 73728 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:43.138431+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 73728 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:44.138556+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 57344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:45.138675+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 57344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:46.138863+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 57344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:47.139028+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 57344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:48.139684+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 57344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:49.139814+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 57344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:50.139963+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 49152 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:51.140146+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:52.140270+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:53.140479+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:54.140715+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:55.140857+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:56.141074+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:57.141227+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:58.141377+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:32:59.141547+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:00.141734+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:01.141963+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:02.142080+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:03.142206+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 40960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:04.142395+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 24576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:05.142700+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 16384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:06.142846+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 16384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:07.142994+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 16384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:08.143165+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 16384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:09.143417+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 16384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:10.143590+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 16384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:11.143716+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 16384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:12.144021+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 16384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:13.144183+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 16384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:14.144360+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 8192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:15.144502+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 8192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:16.144658+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 8192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:17.144818+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 8192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:18.145013+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 0 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:19.145198+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 0 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:20.145321+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 0 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:21.145448+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 0 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:22.145686+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 0 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:23.145827+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 0 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:24.146003+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69165056 unmapped: 1032192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:25.146213+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69165056 unmapped: 1032192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:26.146378+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69165056 unmapped: 1032192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:27.146532+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69165056 unmapped: 1032192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:28.147488+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69165056 unmapped: 1032192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:29.148207+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69165056 unmapped: 1032192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:30.148576+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:31.148961+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:32.149092+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:33.149291+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:34.149687+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:35.149805+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:36.149935+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:37.150051+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:38.150220+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:39.150364+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:40.150534+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69189632 unmapped: 1007616 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:41.150738+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69189632 unmapped: 1007616 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:42.150847+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69189632 unmapped: 1007616 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:43.150975+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69189632 unmapped: 1007616 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:44.151188+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:45.151335+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:46.151469+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:47.151616+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:48.151845+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:49.152034+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:50.152158+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:51.152298+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:52.152496+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:53.152655+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:54.152881+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:55.153126+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:56.153270+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:57.153398+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:58.153603+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:33:59.153731+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69214208 unmapped: 983040 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:00.153861+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69214208 unmapped: 983040 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:01.153957+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69214208 unmapped: 983040 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:02.154735+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69214208 unmapped: 983040 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:03.154874+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69214208 unmapped: 983040 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:04.155132+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:05.155275+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:06.155473+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:07.155698+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:08.155976+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:09.156230+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:10.156477+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:11.156679+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:12.156864+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:13.157065+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:14.157348+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:15.157565+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:16.157747+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:17.157990+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:18.158156+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:19.158307+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:20.158496+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:21.158667+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:22.158796+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:23.158975+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:24.159119+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69271552 unmapped: 925696 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:25.159313+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69271552 unmapped: 925696 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:26.159496+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69271552 unmapped: 925696 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:27.159804+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69271552 unmapped: 925696 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:28.160013+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69271552 unmapped: 925696 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:29.160128+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69271552 unmapped: 925696 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:30.160341+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:31.161496+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:32.161630+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:33.161831+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:34.161960+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:35.162099+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:36.162285+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:37.162438+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:38.162609+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:39.162734+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:40.162912+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:41.163165+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:42.163366+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69296128 unmapped: 901120 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:43.163540+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69296128 unmapped: 901120 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:44.163686+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 876544 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:45.163923+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 876544 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:46.164105+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 876544 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:47.164357+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 876544 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:48.164577+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 876544 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:49.164738+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 876544 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:50.164930+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:51.165100+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:52.165241+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:53.165379+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:54.165513+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:55.165673+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:56.165834+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:57.166029+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:58.166153+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:34:59.166280+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:00.166414+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:01.166534+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:02.166666+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:03.166803+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:04.166984+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:05.167932+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69345280 unmapped: 851968 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:06.168124+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 843776 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:07.168331+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 843776 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:08.168546+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 843776 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:09.168677+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 843776 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:10.168800+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 843776 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:11.168930+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 843776 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:12.169058+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 843776 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:13.169197+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 843776 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: mgrc ms_handle_reset ms_handle_reset con 0x55ab26045c00
Nov 24 18:59:51 compute-0 ceph-osd[88544]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/536471675
Nov 24 18:59:51 compute-0 ceph-osd[88544]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/536471675,v1:192.168.122.100:6801/536471675]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: get_auth_request con 0x55ab27959800 auth_method 0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: mgrc handle_mgr_configure stats_period=5
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:14.169369+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:15.169484+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:16.169602+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:17.169819+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:18.169949+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:19.170076+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:20.170192+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 ms_handle_reset con 0x55ab26d95c00 session 0x55ab25fd8960
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26292400
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:21.170434+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:22.170571+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:23.170706+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:24.170849+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:25.171009+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:26.171154+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:27.171312+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:28.171505+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:29.171640+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:30.171761+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:31.171947+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:32.172184+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:33.172410+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:34.172618+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:35.172871+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:36.173137+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:37.173319+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:38.173587+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:39.173828+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:40.174008+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:41.174731+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:42.174958+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:43.175181+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:44.175462+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:45.175667+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 688128 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:46.175953+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 679936 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:47.176115+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 679936 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:48.176373+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 679936 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:49.176653+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 679936 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:50.177011+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 679936 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:51.177194+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 671744 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:52.177345+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 671744 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:53.177505+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 671744 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:54.177812+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 671744 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:55.178067+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 671744 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:56.178297+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 671744 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:57.178474+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 671744 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:58.178611+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 671744 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:35:59.178815+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 671744 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:00.178973+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 663552 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:01.179115+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 663552 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:02.179269+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 663552 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:03.179523+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 663552 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:04.179713+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 663552 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:05.179868+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:06.180080+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:07.180287+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:08.180837+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:09.180971+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:10.181083+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:11.181218+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:12.181309+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:13.181413+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:14.181516+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:15.181673+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:16.181831+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:17.181959+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:18.182119+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:19.182270+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 647168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:20.182432+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69558272 unmapped: 638976 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:21.182576+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69558272 unmapped: 638976 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:22.182737+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69558272 unmapped: 638976 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 ms_handle_reset con 0x55ab27571400 session 0x55ab273dd2c0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26d95c00
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:23.182927+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:24.183056+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:25.183169+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:26.183270+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:27.183428+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:28.183605+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:29.183754+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:30.183891+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:31.184101+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:32.184276+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:33.184483+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:34.184736+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:35.184872+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:36.185014+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:37.185168+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:38.185318+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:39.185453+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 589824 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:40.185601+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 589824 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:41.185748+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 589824 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:42.185905+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 589824 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:43.185977+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 589824 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:44.186352+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 589824 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:45.186478+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69615616 unmapped: 581632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:46.186640+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69615616 unmapped: 581632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:47.186801+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69615616 unmapped: 581632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:48.186983+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69615616 unmapped: 581632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:49.187123+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69615616 unmapped: 581632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:50.187261+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 573440 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:51.187411+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 573440 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:52.187565+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 573440 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:53.187683+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 573440 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:54.187830+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 573440 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:55.187966+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 573440 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:56.188097+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 573440 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:57.188215+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 573440 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:58.188387+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 573440 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:36:59.188528+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 557056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:00.188769+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 557056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:01.188970+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 557056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:02.189109+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 557056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:03.189267+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 557056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:04.189488+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 557056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:05.189640+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 557056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:06.189889+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 557056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:07.190166+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 557056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:08.190346+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 557056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:09.190573+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 557056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:10.190730+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 557056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:11.190918+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 557056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:12.191028+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 557056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:13.191173+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 557056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:14.191332+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 548864 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:15.191464+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 548864 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:16.191641+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 548864 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:17.191769+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 548864 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:18.192341+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 548864 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:19.192464+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:20.192604+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:21.192729+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:22.192937+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:23.193079+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:24.193195+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:25.193321+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:26.193460+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:27.193579+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:28.193755+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:29.193923+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:30.194102+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:31.194244+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:32.194357+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:33.194471+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:34.194609+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:35.194729+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:36.194842+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:37.194966+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:38.195130+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 532480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:39.195280+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 516096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:40.195446+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 516096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:41.195597+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 516096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:42.195792+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 516096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:43.196017+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 516096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:44.196135+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69681152 unmapped: 516096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:45.196489+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:46.196856+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:47.196974+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69697536 unmapped: 499712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:48.197149+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69697536 unmapped: 499712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:49.197405+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69697536 unmapped: 499712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:50.197522+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69705728 unmapped: 491520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:51.197723+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69705728 unmapped: 491520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:52.197888+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69705728 unmapped: 491520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:53.198018+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69705728 unmapped: 491520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:54.198139+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69705728 unmapped: 491520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:55.198394+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69713920 unmapped: 483328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:56.198512+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69713920 unmapped: 483328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:57.198762+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69713920 unmapped: 483328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:58.198954+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69713920 unmapped: 483328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:37:59.199150+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69713920 unmapped: 483328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:00.199275+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69713920 unmapped: 483328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:01.199394+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69713920 unmapped: 483328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:02.199586+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69713920 unmapped: 483328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:03.199779+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69713920 unmapped: 483328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:04.199960+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69713920 unmapped: 483328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:05.200110+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:06.200255+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:07.200366+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:08.200566+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:09.200716+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:10.200836+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:11.200971+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:12.201218+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:13.201342+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:14.201493+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:15.201664+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:16.201801+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:17.202077+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:18.202355+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:19.202483+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:20.202594+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:21.202733+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:22.202941+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:23.203067+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:24.203186+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:25.203423+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:26.204135+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:27.204264+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:28.204400+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:29.204526+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:30.204634+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:31.204812+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:32.204958+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:33.205098+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:34.205286+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:35.205406+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:36.205564+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:37.205693+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:38.205835+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:39.206011+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:40.206140+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:41.206252+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:42.206447+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:43.206607+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:44.206739+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:45.206882+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:46.207089+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:47.207212+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:48.207398+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:49.207539+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:50.207717+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:51.208186+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:52.208568+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:53.208953+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:54.209159+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:55.209279+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:56.209464+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:57.209618+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:58.209805+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:38:59.209968+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:00.210303+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:01.210452+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:02.210578+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:03.210695+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 450560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:04.210840+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 450560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:05.211003+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 450560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:06.211132+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 450560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:07.211257+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 450560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:08.211421+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 450560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:09.211538+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 450560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:10.211654+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 450560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:11.211800+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 450560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:12.211936+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 450560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:13.212064+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 450560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:14.212226+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 450560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:15.212360+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:16.212643+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:17.212826+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:18.212950+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:19.213092+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:20.213253+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:21.213397+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:22.213561+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:23.213735+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:24.213881+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:25.214040+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:26.214158+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:27.214289+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:28.214503+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:29.214622+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:30.214771+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:31.214919+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:32.215048+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:33.215240+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:34.215380+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 442368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:35.215512+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:36.215673+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:37.215840+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:38.216007+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:39.216150+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:40.216345+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:41.216498+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:42.216660+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:43.216781+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:44.216923+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:45.217067+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:46.217197+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:47.217382+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:48.217573+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:49.217735+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:50.217974+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:51.218205+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:52.218379+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:53.218497+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:54.218617+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:55.218722+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:56.218878+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:57.219044+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:58.219236+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:39:59.219374+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:00.219518+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:01.219752+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:02.219925+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:03.220093+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 5582 writes, 23K keys, 5582 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5582 writes, 857 syncs, 6.51 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 385024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:04.220264+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:05.220413+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:06.220536+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:07.220639+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:08.220797+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:09.220937+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:10.221068+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:11.221191+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:12.221301+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:13.221440+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:14.221607+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:15.221757+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:16.222489+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:17.222701+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:18.222946+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:19.223048+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:20.223171+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:21.223300+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:22.223392+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:23.223589+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:24.223812+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:25.224003+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:26.224153+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:27.224280+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:28.224430+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:29.224592+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:30.224712+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:31.224962+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:32.225125+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:33.225247+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:34.225397+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:35.225593+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:36.225725+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:37.225840+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:38.225977+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:39.226105+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:40.226256+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:41.226380+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:42.226490+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:43.226619+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:44.226686+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:45.226765+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:46.226949+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:47.227075+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:48.227309+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:49.227462+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:50.227575+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:51.227720+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:52.227837+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:53.227985+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:54.228109+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:55.228294+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:56.228458+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:57.228588+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:58.228745+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:40:59.228951+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:00.229184+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:01.229341+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:02.229454+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:03.229617+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:04.229749+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:05.229882+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:06.230062+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:07.230173+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:08.230361+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:09.230490+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:10.230591+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:11.230703+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:12.230822+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:13.230947+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:14.231089+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:15.231234+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:16.231407+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:17.231531+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:18.231666+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:19.231782+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:20.231920+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:21.232099+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:22.232235+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:23.232361+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:24.232527+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:25.232630+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:26.232752+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:27.232920+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:28.233064+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 368640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:29.233214+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 360448 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 599.843994141s of 600.182861328s, submitted: 106
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:30.233330+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 69910528 unmapped: 1335296 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:31.233479+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1245184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:32.233651+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1245184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:33.233793+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1245184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:34.233934+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1245184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:35.234303+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1245184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:36.234495+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1245184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:37.234637+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1245184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:38.234779+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1245184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:39.234975+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:40.235132+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:41.235256+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:42.235393+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:43.235537+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:44.235782+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:45.235890+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:46.236037+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:47.236183+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:48.236361+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:49.236671+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:50.236843+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:51.237006+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:52.237132+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:53.237287+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:54.237840+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:55.237948+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:56.238098+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:57.238245+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:58.238392+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:41:59.238513+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:00.238639+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:01.238824+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:02.238997+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:03.239129+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:04.239252+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:05.239391+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:06.239550+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:07.239697+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:08.239858+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:09.239943+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:10.240064+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:11.240203+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:12.240317+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:13.240417+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:14.240516+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 1204224 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:15.240641+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 1204224 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:16.240751+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 1204224 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:17.240998+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 1204224 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:18.241917+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 1204224 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:19.242023+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70057984 unmapped: 1187840 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:20.242137+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:21.242295+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:22.242461+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:23.242631+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:24.242800+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:25.242995+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:26.243106+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:27.243243+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:28.243452+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:29.243591+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:30.243714+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:31.243842+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:32.244707+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:33.244846+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:34.244972+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:35.245095+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:36.245218+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:37.245397+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:38.245600+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:39.245718+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:40.245866+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:41.246024+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:42.246196+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:43.246368+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:44.246511+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:45.246685+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:46.246799+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:47.246958+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:48.247167+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:49.247353+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:50.247518+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:51.247705+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:52.247837+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:53.247961+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:54.248074+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:55.248226+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:56.248388+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:57.248518+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:58.249194+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:42:59.249309+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1138688 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:00.249427+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1138688 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:01.249610+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1138688 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:02.249728+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1138688 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:03.249855+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1138688 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:04.249999+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1138688 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:05.250125+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1138688 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:06.250260+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:07.251066+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:08.251857+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:09.252513+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:10.252821+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:11.253216+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:12.253569+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:13.254012+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:14.254451+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:15.254710+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:16.255008+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:17.255286+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:18.255533+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:19.255746+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:20.256010+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:21.256171+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:22.256401+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:23.256583+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:24.256746+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:25.256992+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:26.257185+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:27.257367+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:28.257600+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:29.257780+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:30.257953+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:31.258117+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:32.258295+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:33.258462+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:34.258586+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 1105920 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:35.258793+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 1105920 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:36.258995+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 1105920 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:37.259156+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 1105920 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:38.259330+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 1105920 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:39.260678+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:40.262862+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:41.264668+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:42.265444+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:43.266986+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:44.268003+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:45.268315+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:46.269474+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:47.270072+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:48.271026+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:49.271780+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:50.272104+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:51.272742+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:52.273239+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:53.273471+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:54.274024+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:55.274329+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:56.274605+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:57.275047+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:58.275344+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:43:59.275600+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:00.275856+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:01.275964+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:02.276122+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:03.276297+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:04.276604+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:05.276760+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:06.276970+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:07.277239+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:08.277527+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:09.277751+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:10.278002+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:11.278276+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:12.279353+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:13.280151+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:14.280395+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:15.281149+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:16.281629+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:17.282317+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:18.282988+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:19.283320+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:20.283754+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:21.284131+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:22.284307+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:23.284694+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:24.284994+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:25.285383+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:26.285793+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:27.286025+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:28.286200+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:29.286406+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:30.286692+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:31.286948+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:32.287185+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:33.287412+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:34.287618+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:35.287763+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:36.287945+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:37.288137+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:38.288387+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:39.288512+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:40.288768+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:41.288963+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:42.289136+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:43.289290+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:44.289427+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:45.289780+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:46.290467+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:47.291025+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:48.291316+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:49.291809+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:50.292263+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:51.292721+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:52.293184+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:53.293533+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:54.293793+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:55.294123+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:56.294337+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:57.294470+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:58.294748+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:44:59.295004+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:00.295250+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:01.295491+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:02.295711+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:03.295976+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:04.296115+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:05.296260+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:06.296439+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:07.296615+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:08.298432+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:09.298565+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:10.298780+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:11.298965+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:12.299135+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:13.299272+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:14.299513+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:15.299735+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:16.299936+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:17.300073+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:18.300293+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:19.300446+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:20.300618+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:21.300755+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:22.300885+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:23.301060+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:24.301183+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:25.301390+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:26.301551+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:27.301707+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:28.301997+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:29.302182+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:30.302402+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:31.302588+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:32.302750+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:33.302936+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:34.303089+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:35.303269+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:36.303504+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:37.303726+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:38.303971+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:39.304144+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:40.304286+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:41.304457+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:42.304642+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:43.304819+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:44.305005+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:45.305162+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:46.305305+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:47.305488+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:48.305666+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:49.305800+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:50.305965+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:51.306128+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:52.306327+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:53.306488+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:54.306642+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:55.306835+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:56.306973+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:57.307144+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:58.307345+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:45:59.307518+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:00.307712+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:01.307831+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:02.308033+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:03.308182+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:04.308313+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:05.308456+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:06.308623+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:07.308775+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:08.308985+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:09.309127+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:10.309307+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:11.309463+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:12.309625+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:13.309752+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:14.309893+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:15.310082+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:16.310283+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:17.310424+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:18.311080+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:19.311231+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:20.311438+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:21.311596+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:22.314972+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:23.319505+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:24.320327+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:25.322274+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:26.323995+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:27.324141+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:28.324806+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:29.325282+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:30.326231+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:31.326806+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:32.327550+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:33.328245+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:34.328580+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:35.328801+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:36.329473+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:37.329614+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:38.330145+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:39.330275+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70238208 unmapped: 1007616 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:40.330637+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70238208 unmapped: 1007616 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:41.330960+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70238208 unmapped: 1007616 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:42.331219+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70238208 unmapped: 1007616 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:43.331436+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70238208 unmapped: 1007616 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:44.331601+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:45.331770+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:46.331996+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:47.332140+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:48.332357+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:49.332492+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:50.332650+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:51.332783+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:52.332955+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:53.333119+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:54.333239+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:55.333387+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:56.333505+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:57.333639+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:58.333824+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:46:59.333988+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:00.334118+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:01.334238+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:02.334408+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:03.334558+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:04.334720+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:05.334865+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:06.334997+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:07.335130+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:08.335364+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:09.335515+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:10.335649+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:11.335811+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:12.335998+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:13.336185+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:14.336337+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:15.336465+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:16.336617+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:17.336781+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:18.336978+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:19.337112+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:20.337304+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:21.337430+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:22.337578+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:23.337713+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:24.337839+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:25.338037+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:26.338292+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:27.338613+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:28.339745+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:29.339946+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:30.340060+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:31.340317+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:32.340959+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:33.341471+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:34.342043+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:35.342526+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:36.342770+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:37.343032+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:38.343308+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:39.343524+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:40.343747+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:41.343934+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:42.344255+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:43.344497+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:44.344638+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865330 data_alloc: 218103808 data_used: 143360
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:45.344873+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 120 handle_osd_map epochs [121,122], i have 120, src has [1,122]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 375.130676270s of 375.512084961s, submitted: 106
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab27571400
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab010/0x163000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70238208 unmapped: 1007616 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:46.345038+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 10248192 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:47.345385+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 123 ms_handle_reset con 0x55ab27571400 session 0x55ab2683e3c0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70320128 unmapped: 10240000 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:48.345693+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70320128 unmapped: 10240000 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:49.345830+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754c800
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938461 data_alloc: 218103808 data_used: 159744
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fc640000/0x0/0x4ffc00000, data 0x52031a/0x5dd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 123 ms_handle_reset con 0x55ab2754c800 session 0x55ab25fd8960
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70377472 unmapped: 10182656 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:50.346009+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 10174464 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:51.346128+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fc1d1000/0x0/0x4ffc00000, data 0x99031a/0xa4d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:52.346414+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:53.346578+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:54.346763+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942795 data_alloc: 218103808 data_used: 172032
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fc1cd000/0x0/0x4ffc00000, data 0x991eb3/0xa50000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 124 handle_osd_map epochs [125,125], i have 125, src has [1,125]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:55.346937+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:56.347093+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc1ca000/0x0/0x4ffc00000, data 0x993916/0xa53000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:57.347287+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:58.347465+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:47:59.347611+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945769 data_alloc: 218103808 data_used: 172032
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:00.347789+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:01.348005+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc1ca000/0x0/0x4ffc00000, data 0x993916/0xa53000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:02.348147+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:03.348340+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:04.348503+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945769 data_alloc: 218103808 data_used: 172032
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:05.348673+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:06.348807+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:07.348939+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc1ca000/0x0/0x4ffc00000, data 0x993916/0xa53000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:08.349124+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:09.349343+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945929 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:10.349538+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:11.349726+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:12.349892+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc1ca000/0x0/0x4ffc00000, data 0x993916/0xa53000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:13.350106+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc1ca000/0x0/0x4ffc00000, data 0x993916/0xa53000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:14.350280+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945929 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:15.350457+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:16.350625+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:17.350755+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:18.350972+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc1ca000/0x0/0x4ffc00000, data 0x993916/0xa53000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:19.351159+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc1ca000/0x0/0x4ffc00000, data 0x993916/0xa53000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945929 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:20.351334+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:21.351474+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc1ca000/0x0/0x4ffc00000, data 0x993916/0xa53000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 125 handle_osd_map epochs [126,126], i have 126, src has [1,126]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 36.031490326s of 36.668552399s, submitted: 38
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 10133504 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:22.351626+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754cc00
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70459392 unmapped: 10100736 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:23.351775+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fc1c6000/0x0/0x4ffc00000, data 0x995493/0xa56000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 126 handle_osd_map epochs [126,127], i have 126, src has [1,127]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70459392 unmapped: 10100736 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:24.351972+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 127 ms_handle_reset con 0x55ab2754cc00 session 0x55ab29446b40
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953323 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70508544 unmapped: 10051584 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:25.352147+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc1c4000/0x0/0x4ffc00000, data 0x997064/0xa59000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70508544 unmapped: 10051584 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:26.352326+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26d94800
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 128 ms_handle_reset con 0x55ab26d94800 session 0x55ab28eb4000
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70533120 unmapped: 10027008 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:27.352478+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fc1c4000/0x0/0x4ffc00000, data 0x997064/0xa59000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70533120 unmapped: 10027008 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:28.352639+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab290a3400
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70533120 unmapped: 10027008 heap: 80560128 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:29.352775+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033441 data_alloc: 218103808 data_used: 176128
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab290a3000
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 18243584 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:30.352915+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 128 ms_handle_reset con 0x55ab290a3000 session 0x55ab29092000
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 18128896 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:31.353064+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fa1c0000/0x0/0x4ffc00000, data 0x2998c30/0x2a5e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 128 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab290a2c00
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fa1c0000/0x0/0x4ffc00000, data 0x2998c30/0x2a5e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab290a2400
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 130 ms_handle_reset con 0x55ab290a3400 session 0x55ab293d34a0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 130 ms_handle_reset con 0x55ab290a2c00 session 0x55ab290921e0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 18087936 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:32.353208+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 130 ms_handle_reset con 0x55ab290a2400 session 0x55ab29447680
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.400087357s of 10.776869774s, submitted: 38
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 131 heartbeat osd_stat(store_statfs(0x4fa1ba000/0x0/0x4ffc00000, data 0x299afbb/0x2a63000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26d94800
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754cc00
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 18079744 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:33.353341+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 18071552 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:34.353461+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 132 ms_handle_reset con 0x55ab2754cc00 session 0x55ab290925a0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754c800
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 132 ms_handle_reset con 0x55ab26d94800 session 0x55ab273dd860
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982998 data_alloc: 218103808 data_used: 188416
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26d94800
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754cc00
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 133 ms_handle_reset con 0x55ab26d94800 session 0x55ab28eb4f00
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 133 ms_handle_reset con 0x55ab2754c800 session 0x55ab29092780
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 16932864 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:35.353631+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 133 ms_handle_reset con 0x55ab2754cc00 session 0x55ab29447a40
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab290a2400
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fb9b0000/0x0/0x4ffc00000, data 0x9a0320/0xa6e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 134 ms_handle_reset con 0x55ab290a2400 session 0x55ab287cc5a0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 16883712 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:36.353807+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab290a2c00
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:37.353982+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 16859136 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 135 ms_handle_reset con 0x55ab290a2c00 session 0x55ab287cc960
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26d94800
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:38.354196+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 16818176 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 136 ms_handle_reset con 0x55ab26d94800 session 0x55ab287cd2c0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc1a6000/0x0/0x4ffc00000, data 0x9a5688/0xa77000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:39.354442+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 16801792 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000265 data_alloc: 218103808 data_used: 221184
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754c800
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:40.354572+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 16793600 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 137 ms_handle_reset con 0x55ab2754c800 session 0x55ab287cc960
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754cc00
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:41.354725+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 16752640 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc1a3000/0x0/0x4ffc00000, data 0x9a8e67/0xa7b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc1a3000/0x0/0x4ffc00000, data 0x9a8e67/0xa7b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:42.354855+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 16736256 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 138 ms_handle_reset con 0x55ab2754cc00 session 0x55ab29447a40
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab27571400
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:43.355016+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 16687104 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.387613297s of 10.994990349s, submitted: 180
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 140 ms_handle_reset con 0x55ab27571400 session 0x55ab287cda40
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:44.355172+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 16646144 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010927 data_alloc: 218103808 data_used: 245760
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab287b9c00
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:45.355301+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 72351744 unmapped: 16605184 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 141 ms_handle_reset con 0x55ab287b9c00 session 0x55ab29578b40
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc199000/0x0/0x4ffc00000, data 0x9ae158/0xa84000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26d94800
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754c800
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:46.355445+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 72343552 unmapped: 16613376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 142 ms_handle_reset con 0x55ab2754c800 session 0x55ab290930e0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754cc00
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:47.355552+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 15540224 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 143 ms_handle_reset con 0x55ab26d94800 session 0x55ab29578f00
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab27571400
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 143 ms_handle_reset con 0x55ab27571400 session 0x55ab295792c0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 143 ms_handle_reset con 0x55ab2754cc00 session 0x55ab29093860
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:48.355763+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 15491072 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc18e000/0x0/0x4ffc00000, data 0x9b4b8b/0xa8d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:49.355976+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 15491072 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1023612 data_alloc: 218103808 data_used: 258048
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc18e000/0x0/0x4ffc00000, data 0x9b4b8b/0xa8d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc18e000/0x0/0x4ffc00000, data 0x9b4b8b/0xa8d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:50.356225+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 15441920 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:51.356430+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 15441920 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:52.356659+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 15441920 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc18e000/0x0/0x4ffc00000, data 0x9b4b8b/0xa8d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:53.356839+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 15433728 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:54.356985+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 15433728 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025050 data_alloc: 218103808 data_used: 258048
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab293cac00
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 145 ms_handle_reset con 0x55ab293cac00 session 0x55ab295794a0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:55.357096+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 15433728 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab293cac00
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 145 ms_handle_reset con 0x55ab293cac00 session 0x55ab29579860
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:56.357246+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 15433728 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26d94800
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 145 ms_handle_reset con 0x55ab26d94800 session 0x55ab29579a40
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754c800
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fc18d000/0x0/0x4ffc00000, data 0x9b6686/0xa90000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:57.357370+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 15433728 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:58.357578+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 15433728 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 145 handle_osd_map epochs [147,147], i have 145, src has [1,147]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 145 handle_osd_map epochs [146,147], i have 145, src has [1,147]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.108797073s of 14.427786827s, submitted: 106
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 147 ms_handle_reset con 0x55ab2754c800 session 0x55ab29579e00
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754cc00
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:48:59.357725+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 14057472 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1039723 data_alloc: 218103808 data_used: 262144
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:00.357968+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 14057472 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:01.358143+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 14057472 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab27571400
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fc161000/0x0/0x4ffc00000, data 0x9dde07/0xabc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:02.358333+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 14049280 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab28b6a400
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 148 ms_handle_reset con 0x55ab28b6a400 session 0x55ab29447e00
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 148 ms_handle_reset con 0x55ab27571400 session 0x55ab291141e0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26d94800
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:03.358479+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 12992512 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 149 ms_handle_reset con 0x55ab26d94800 session 0x55ab291145a0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754c800
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 149 ms_handle_reset con 0x55ab2754c800 session 0x55ab29114960
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab27571400
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:04.359009+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 12984320 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 150 ms_handle_reset con 0x55ab27571400 session 0x55ab29114b40
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049693 data_alloc: 218103808 data_used: 270336
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab28b6a400
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:05.359157+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 12926976 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:06.359615+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 151 ms_handle_reset con 0x55ab28b6a400 session 0x55ab291150e0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 12918784 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab293cac00
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:07.359742+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 12910592 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Nov 24 18:59:51 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Nov 24 18:59:51 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1773919482' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 152 ms_handle_reset con 0x55ab293cac00 session 0x55ab29115860
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc153000/0x0/0x4ffc00000, data 0x9e4d23/0xac9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:08.359986+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 12959744 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:09.360159+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 12959744 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058405 data_alloc: 218103808 data_used: 278528
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc14f000/0x0/0x4ffc00000, data 0x9e68d8/0xacc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:10.360326+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 12926976 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fc14f000/0x0/0x4ffc00000, data 0x9e68d8/0xacc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:11.360452+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 12926976 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:12.360589+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 12926976 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26d94800
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 152 ms_handle_reset con 0x55ab26d94800 session 0x55ab29115c20
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754c800
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 152 ms_handle_reset con 0x55ab2754c800 session 0x55ab29115e00
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab27571400
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 152 ms_handle_reset con 0x55ab27571400 session 0x55ab29114000
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab28b6a400
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 152 ms_handle_reset con 0x55ab28b6a400 session 0x55ab291141e0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26a4f800
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.032873154s of 14.419190407s, submitted: 76
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 152 ms_handle_reset con 0x55ab26a4f800 session 0x55ab29447e00
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26d94800
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 153 ms_handle_reset con 0x55ab26d94800 session 0x55ab295794a0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754c800
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 153 ms_handle_reset con 0x55ab2754c800 session 0x55ab29579a40
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab27571400
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 153 ms_handle_reset con 0x55ab27571400 session 0x55ab29579e00
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab28b6a400
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 153 ms_handle_reset con 0x55ab28b6a400 session 0x55ab29093860
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:13.360712+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 11862016 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26a4e000
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 153 ms_handle_reset con 0x55ab26a4e000 session 0x55ab287cc3c0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26a4e000
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 153 ms_handle_reset con 0x55ab26a4e000 session 0x55ab28378000
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:14.360860+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 11862016 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26d94800
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 153 ms_handle_reset con 0x55ab26d94800 session 0x55ab2783c000
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754c800
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 153 ms_handle_reset con 0x55ab2754c800 session 0x55ab29447e00
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fc14d000/0x0/0x4ffc00000, data 0x9e8383/0xad0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064614 data_alloc: 218103808 data_used: 286720
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab27571400
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab28b6a400
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:15.360973+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 12910592 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:16.361111+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 12910592 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26a4f400
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 153 ms_handle_reset con 0x55ab26a4f400 session 0x55ab290930e0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab290a3c00
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:17.361247+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 12910592 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 153 ms_handle_reset con 0x55ab290a3c00 session 0x55ab287cda40
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab290a3c00
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26a4e000
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 154 ms_handle_reset con 0x55ab26a4e000 session 0x55ab29115c20
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 154 ms_handle_reset con 0x55ab290a3c00 session 0x55ab29447a40
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26a4f400
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 154 ms_handle_reset con 0x55ab26a4f400 session 0x55ab287cc3c0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26d94800
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:18.361457+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 12902400 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 155 ms_handle_reset con 0x55ab26d94800 session 0x55ab287cda40
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754c800
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:19.361598+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fc142000/0x0/0x4ffc00000, data 0x9ebf46/0xada000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 12869632 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076395 data_alloc: 218103808 data_used: 311296
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 156 ms_handle_reset con 0x55ab2754c800 session 0x55ab29093860
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:20.361748+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 12861440 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:21.361976+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 12861440 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754c800
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:22.362152+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 12861440 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 156 ms_handle_reset con 0x55ab2754c800 session 0x55ab29447e00
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26a4e000
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.323160172s of 10.468074799s, submitted: 55
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:23.362270+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 12861440 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 156 handle_osd_map epochs [156,157], i have 156, src has [1,157]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:24.362376+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 157 ms_handle_reset con 0x55ab26a4e000 session 0x55ab28378000
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 12812288 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079914 data_alloc: 218103808 data_used: 327680
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:25.362510+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fc13f000/0x0/0x4ffc00000, data 0x9ef27a/0xadd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 12812288 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 157 ms_handle_reset con 0x55ab27571400 session 0x55ab291150e0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 157 ms_handle_reset con 0x55ab28b6a400 session 0x55ab29106000
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fc13f000/0x0/0x4ffc00000, data 0x9ef27a/0xadd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26a4f400
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:26.362666+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 12779520 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 158 ms_handle_reset con 0x55ab26a4f400 session 0x55ab29622000
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:27.362803+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 12738560 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:28.362994+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 12738560 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc13f000/0x0/0x4ffc00000, data 0x9f0e24/0xadd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:29.363173+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 12738560 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084363 data_alloc: 218103808 data_used: 327680
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:30.363355+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 12738560 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 159 ms_handle_reset con 0x55ab2754cc00 session 0x55ab295792c0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26a4e000
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 159 ms_handle_reset con 0x55ab26a4e000 session 0x55ab2960d2c0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:31.363516+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:32.363725+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754c800
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 159 ms_handle_reset con 0x55ab2754c800 session 0x55ab2960dc20
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab27571400
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _renew_subs
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 160 ms_handle_reset con 0x55ab27571400 session 0x55ab2960de00
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:33.363944+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc161000/0x0/0x4ffc00000, data 0x9d0437/0xabc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:34.364200+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083630 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:35.364405+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:36.364591+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc161000/0x0/0x4ffc00000, data 0x9d0437/0xabc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:37.364754+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:38.365003+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:39.365137+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083630 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:40.365247+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fc161000/0x0/0x4ffc00000, data 0x9d0437/0xabc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 160 handle_osd_map epochs [161,161], i have 161, src has [1,161]
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.256870270s of 17.725013733s, submitted: 102
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:41.365359+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:42.365465+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:43.365599+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:44.365731+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:45.365879+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:46.366126+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:47.366288+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:48.366491+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 12730368 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:49.366623+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:50.366831+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:51.367031+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:52.367204+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:53.367363+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:54.367501+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:55.367652+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:56.367818+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:57.367957+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:58.368181+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:49:59.368359+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:00.368572+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:01.368760+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:02.368932+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:03.369167+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 6867 writes, 27K keys, 6867 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6867 writes, 1384 syncs, 4.96 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1285 writes, 3543 keys, 1285 commit groups, 1.0 writes per commit group, ingest: 1.95 MB, 0.00 MB/s
                                           Interval WAL: 1285 writes, 527 syncs, 2.44 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:04.369350+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:51 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:51 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:05.369528+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:06.369787+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:07.369942+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:08.370157+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 12713984 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:51 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:09.370304+0000)
Nov 24 18:59:51 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:51 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 12697600 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:51 compute-0 ceph-mon[74927]: pgmap v1359: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:51 compute-0 ceph-mon[74927]: from='client.15181 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:59:51 compute-0 ceph-mon[74927]: from='client.15185 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:59:51 compute-0 ceph-mon[74927]: from='client.15187 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:59:51 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1312726327' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 18:59:51 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2356360020' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 24 18:59:51 compute-0 ceph-mon[74927]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 18:59:51 compute-0 ceph-mon[74927]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 18:59:51 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/1773919482' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:10.370473+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 12697600 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:11.370652+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 12697600 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:12.370817+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 12697600 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:13.371055+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: mgrc ms_handle_reset ms_handle_reset con 0x55ab27959800
Nov 24 18:59:52 compute-0 ceph-osd[88544]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/536471675
Nov 24 18:59:52 compute-0 ceph-osd[88544]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/536471675,v1:192.168.122.100:6801/536471675]
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: get_auth_request con 0x55ab26a4f800 auth_method 0
Nov 24 18:59:52 compute-0 ceph-osd[88544]: mgrc handle_mgr_configure stats_period=5
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 12550144 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:14.371252+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 12550144 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:15.371420+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 12550144 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:16.371587+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 12550144 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:17.371724+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 12550144 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:18.371953+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 12550144 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:19.372090+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 12550144 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:20.372240+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 ms_handle_reset con 0x55ab26292400 session 0x55ab25fd94a0
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab2754c400
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 12550144 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:21.372404+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 12550144 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:22.372601+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 12541952 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:23.372747+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 12541952 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:24.372967+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 12541952 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:25.373150+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 12541952 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:26.373363+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 12541952 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:27.373531+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 12541952 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:28.373726+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 12541952 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:29.373859+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 12541952 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:30.373993+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 12541952 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:31.374147+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 12541952 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:32.374314+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 12541952 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:33.374487+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:34.374670+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:35.374866+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:36.375195+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:37.375389+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:38.375560+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:39.375752+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:40.375938+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:41.376106+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:42.376258+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:43.376402+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:44.376586+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:45.376710+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:46.376868+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:47.376967+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:48.377119+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:49.377318+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:50.377438+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:51.377595+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:52.377718+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:53.377859+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:54.378006+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:55.378135+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:56.378288+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:57.378436+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:58.378631+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:50:59.378795+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 12525568 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:00.378978+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:01.379159+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:02.379348+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:03.379525+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:04.379685+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:05.379868+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:06.380010+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:07.380128+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:08.380289+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:09.380441+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:10.380618+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:11.380805+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:12.380988+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:13.381108+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:14.382712+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:15.382845+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:16.382956+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:17.383066+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:18.383208+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:19.383337+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:20.383456+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:21.383584+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:22.383696+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 ms_handle_reset con 0x55ab26d95c00 session 0x55ab26b38000
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: handle_auth_request added challenge on 0x55ab26292400
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:23.383854+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:24.383946+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 12517376 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086604 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:25.384065+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 12402688 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: do_command 'config diff' '{prefix=config diff}'
Nov 24 18:59:52 compute-0 ceph-osd[88544]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15e000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: do_command 'config show' '{prefix=config show}'
Nov 24 18:59:52 compute-0 ceph-osd[88544]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 24 18:59:52 compute-0 ceph-osd[88544]: do_command 'counter dump' '{prefix=counter dump}'
Nov 24 18:59:52 compute-0 ceph-osd[88544]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:26.384178+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: do_command 'counter schema' '{prefix=counter schema}'
Nov 24 18:59:52 compute-0 ceph-osd[88544]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 11878400 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:27.384293+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 11796480 heap: 88956928 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: do_command 'log dump' '{prefix=log dump}'
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:28.384405+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 24 18:59:52 compute-0 ceph-osd[88544]: do_command 'perf dump' '{prefix=perf dump}'
Nov 24 18:59:52 compute-0 ceph-osd[88544]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 22765568 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 24 18:59:52 compute-0 ceph-osd[88544]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 24 18:59:52 compute-0 ceph-osd[88544]: do_command 'perf schema' '{prefix=perf schema}'
Nov 24 18:59:52 compute-0 ceph-osd[88544]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:29.384532+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 22913024 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 109.142707825s of 109.160781860s, submitted: 57
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:30.384631+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 23003136 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:31.384760+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [0,0,0,0,1])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 22937600 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:32.385332+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 22937600 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:33.385525+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 22937600 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:34.385686+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 22937600 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:35.385804+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 22937600 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:36.385964+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 22937600 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:37.386088+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 22937600 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:38.386253+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 22937600 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:39.386372+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 22937600 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:40.386489+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 22937600 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:41.386651+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 22937600 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:42.386791+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 22937600 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:43.386938+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 22937600 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:44.387054+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 22937600 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:45.387172+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 22937600 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:46.387945+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 22937600 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:47.388083+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 22937600 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:48.388422+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 22937600 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:49.388997+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 22937600 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:50.389242+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 22937600 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:51.389726+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 22937600 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:52.390127+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 22937600 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:53.390349+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 22937600 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:54.390486+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 22937600 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:55.390827+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 22929408 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:56.390945+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 22929408 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:57.391057+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 22929408 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:58.391196+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 22929408 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:51:59.391378+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 22929408 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:00.391527+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 22929408 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:01.391791+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 22929408 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:02.391948+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 22929408 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:03.392098+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 22921216 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:04.392308+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 22921216 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:05.392455+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 22921216 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:06.392600+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 22921216 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:07.392743+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 22921216 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:08.392923+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 22921216 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:09.393049+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 22921216 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:10.393181+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 22921216 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:11.393332+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 22921216 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:12.393472+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 22921216 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:13.393601+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 22921216 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:14.393772+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 22921216 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:15.393884+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 22921216 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:16.393965+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 22921216 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:17.394118+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 22921216 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:18.394299+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 22921216 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:19.394443+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 22921216 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:20.394650+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 22921216 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:21.394796+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 22921216 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:22.394976+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 22921216 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:23.395172+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 22921216 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:24.395390+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77078528 unmapped: 22921216 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:25.395569+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 22913024 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:26.395701+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 22913024 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:27.395953+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 22913024 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:28.396083+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 22913024 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:29.396274+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 22913024 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:30.396482+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:31.396618+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 22913024 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:32.396792+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 22913024 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:33.396974+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 22896640 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:34.397113+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 22896640 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:35.397320+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 22888448 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:36.397507+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 22888448 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:37.397726+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 22888448 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:38.397966+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 22888448 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:39.398161+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 22888448 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:40.398305+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 22888448 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:41.398510+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 22888448 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:42.398690+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 22888448 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:43.398890+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 22888448 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:44.399090+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 22888448 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:45.399216+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 22888448 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 podman[300282]: 2025-11-24 18:59:52.021570737 +0000 UTC m=+0.110455776 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:46.399361+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 22888448 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:47.399508+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 22888448 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:48.399735+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 22888448 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:49.399917+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 22888448 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:50.400116+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 22888448 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:51.400413+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 22888448 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:52.400567+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 22888448 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:53.400733+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 22872064 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:54.400859+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 22872064 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:55.401015+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 22872064 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:56.401255+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 22872064 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:57.401434+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 22872064 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:58.401624+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 22872064 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:52:59.401797+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 22872064 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:00.401985+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 22863872 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:01.402139+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 22863872 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:02.402350+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 22863872 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:03.402497+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 22863872 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:04.402692+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 22863872 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:05.402830+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 22863872 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:06.402968+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 22863872 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:07.403190+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 22863872 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 podman[300283]: 2025-11-24 18:59:52.030760944 +0000 UTC m=+0.111387189 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:08.403405+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 22863872 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:09.403576+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 22863872 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:10.404000+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 22863872 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:11.404111+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 22863872 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:12.404236+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 22863872 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:13.404431+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 22847488 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:14.404628+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 22847488 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:15.404837+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 22847488 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:16.404999+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 22847488 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:17.405158+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 22847488 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:18.405328+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 22847488 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:19.405471+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 22847488 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:20.405668+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 22847488 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:21.405828+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 22847488 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:22.405988+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 22847488 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:23.406138+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 22847488 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:24.406300+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 22847488 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:25.406402+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 22847488 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:26.406571+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 22847488 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:27.406713+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 22847488 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:28.406937+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 22847488 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:29.407055+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 22847488 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:30.407206+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 22847488 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:31.407328+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 22847488 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:32.407405+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 22847488 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:33.407496+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 22831104 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:34.407633+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 22831104 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:35.407785+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 22831104 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:36.407968+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 22831104 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:37.408089+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 22831104 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:38.408269+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 22831104 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:39.408418+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 22831104 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:40.408583+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 22831104 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:41.408720+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 22831104 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:42.408982+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 22831104 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:43.409110+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 22831104 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:44.409247+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 22831104 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:45.409413+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 22831104 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:46.409607+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 22831104 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:47.409754+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 22831104 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:48.409944+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 22831104 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:49.410069+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 22831104 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:50.410231+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 22831104 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:51.410401+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 22831104 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:52.410625+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 22831104 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:53.410850+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 22814720 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:54.411047+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 22814720 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:55.411232+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 22814720 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:56.411356+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 22814720 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:57.411496+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 22806528 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:58.411714+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 22806528 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:53:59.411860+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 22806528 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:00.412006+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 22806528 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:01.412146+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 22806528 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:02.412297+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 22806528 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:03.412449+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 22806528 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:04.412642+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 22790144 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:05.412828+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 22790144 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:06.412967+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 22790144 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:07.413194+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 22790144 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:08.413445+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 22790144 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:09.413576+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 22790144 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:10.413732+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 22790144 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:11.413945+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 22790144 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:12.414159+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 22790144 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:13.414320+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 22773760 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:14.414432+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 22773760 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:15.414562+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 22765568 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:16.414728+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 22765568 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:17.414970+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 22765568 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:18.415264+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 22765568 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:19.415458+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 22765568 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:20.415741+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 22765568 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:21.416009+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 22765568 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:22.416214+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 22765568 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:23.416463+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 22765568 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 podman[300284]: 2025-11-24 18:59:52.036496856 +0000 UTC m=+0.122967845 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:24.416630+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 22765568 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:25.416811+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 22765568 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:26.417063+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 22765568 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:27.417338+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 22765568 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:28.417570+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 22765568 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:29.417714+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 22765568 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:30.417879+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 22765568 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:31.418027+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 22765568 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:32.418219+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 22765568 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:33.418345+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 22749184 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:34.418484+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 22749184 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:35.418617+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 22749184 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:36.418828+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 22749184 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:37.419013+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 22749184 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:38.419180+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 22749184 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:39.419286+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 22749184 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:40.419512+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 22749184 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:41.419679+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 22749184 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:42.419808+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 22749184 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:43.420023+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 22749184 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:44.420138+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 22749184 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:45.420298+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 22749184 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:46.420436+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 22749184 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:47.420569+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 22749184 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:48.420729+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 22749184 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:49.420864+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 22749184 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:50.421124+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 22749184 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:51.421274+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:52.421445+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 22749184 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:53.421578+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 22749184 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:54.421702+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 22732800 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:55.421872+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 22732800 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:56.422038+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 22732800 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:57.422208+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 22732800 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:58.422398+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 22732800 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:54:59.424450+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 22732800 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:00.425223+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 22732800 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:01.426795+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 22732800 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:02.427179+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 22732800 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:03.427650+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 22732800 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:04.427833+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 22732800 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:05.428694+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 22732800 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:06.429027+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 22732800 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:07.429613+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 22732800 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:08.430231+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 22732800 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:09.430623+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 22732800 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:10.431075+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 22732800 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:11.431224+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 22732800 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:12.431830+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 22732800 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:13.432166+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 22716416 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:14.432387+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 22716416 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:15.432936+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 22716416 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:16.433369+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 22716416 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:17.433573+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 22716416 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:18.433784+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 22716416 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:19.434234+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 22716416 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:20.434382+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 22716416 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:21.434648+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 22716416 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:22.434794+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 22716416 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:23.434955+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 22716416 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:24.435074+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 22716416 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:25.435299+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 22716416 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:26.435501+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 22716416 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:27.435712+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 22716416 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:28.436012+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 22716416 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:29.436198+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 22716416 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:30.436412+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 22716416 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:31.438235+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 22716416 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:32.438823+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 22716416 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:33.439857+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 22700032 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:34.440146+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 22700032 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:35.440733+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 22700032 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:36.441768+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 22700032 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:37.442749+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 22700032 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:38.443158+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 22700032 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:39.444006+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 22700032 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:40.444390+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 22700032 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:41.444688+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 22700032 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:42.444998+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 22700032 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:43.445383+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 22700032 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:44.445833+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 22700032 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:45.446016+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 22700032 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:46.446266+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 22700032 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:47.446414+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 22700032 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:48.446604+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 22700032 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:49.446756+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 22700032 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:50.446961+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 22700032 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:51.447191+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 22700032 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:52.447395+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 22700032 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:53.447611+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 22683648 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:54.447974+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 22683648 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:55.448215+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 22683648 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:56.448483+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 22683648 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:57.448686+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 22683648 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:58.448957+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 22683648 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:55:59.449154+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 22683648 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:00.449312+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 22683648 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:01.449548+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 22683648 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:02.449711+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 22683648 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:03.449842+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 22683648 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:04.449961+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 22683648 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:05.450196+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 22683648 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:06.450369+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 22683648 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:07.450527+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 22683648 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:08.450690+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 22683648 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:09.450849+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 22683648 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:10.451029+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 22683648 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:11.451198+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 22683648 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:12.451347+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 22683648 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:13.451468+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:14.451639+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:15.451824+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:16.452010+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:17.452144+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:18.452309+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:19.452436+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:20.452607+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:21.452746+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:22.452963+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:23.453087+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:24.453264+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:25.453444+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:26.453575+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:27.453729+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:28.454071+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:29.454232+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:30.454378+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:31.454507+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:32.454672+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:33.454845+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:34.454986+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:35.455156+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:36.455363+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:37.455504+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:38.455748+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:39.455934+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:40.456202+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:41.456394+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:42.456570+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:43.456760+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:44.456940+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:45.457143+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:46.457339+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:47.458358+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:48.458681+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:49.458858+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:50.459059+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:51.459211+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:52.459382+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 22667264 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:53.459533+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 22650880 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:54.459747+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 22650880 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:55.459950+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 22650880 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:56.460114+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 22650880 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:57.460313+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 22650880 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:58.460514+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 22650880 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:56:59.460638+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 22650880 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:00.460777+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 22650880 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:01.460966+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 22650880 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:02.461123+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 22650880 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:03.461310+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 22650880 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:04.461476+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 22634496 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:05.461670+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 22634496 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:06.461947+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 22634496 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:07.462107+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 22634496 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:08.462297+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 22634496 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:09.462428+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 22634496 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:10.462609+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 22634496 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:11.462772+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 22634496 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:12.462951+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 22634496 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:13.463099+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 22618112 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:14.463236+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 22618112 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:15.463368+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 22618112 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:16.463568+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 22618112 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:17.463705+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 22618112 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:18.463880+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 22618112 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:19.464056+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 22618112 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:20.464208+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 22618112 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:21.464331+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 22618112 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:22.464495+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 22618112 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:23.465205+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 22618112 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:24.465360+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 22618112 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:25.465535+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 22618112 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:26.465710+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 22618112 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:27.466009+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 22618112 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:28.466217+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 22618112 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:29.466560+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 22618112 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:30.466688+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 22618112 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:31.466803+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 22618112 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:32.466946+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 22618112 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:33.467095+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:34.467269+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:35.467407+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:36.467570+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:37.467698+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:38.467894+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:39.468179+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:40.468814+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:41.469850+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:42.470344+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:43.470701+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:44.471350+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:45.471959+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:46.472511+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:47.472878+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:48.473261+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:49.473396+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:50.473734+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:51.474028+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:52.474249+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:53.474501+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:54.474720+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:55.474888+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:56.475069+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:57.475201+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:58.475402+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:57:59.475527+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:00.475761+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:01.476076+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:02.476262+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:03.476514+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:04.476714+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:05.476956+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:06.477172+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:07.477326+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:08.477563+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:09.477782+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:10.477972+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:11.478208+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:12.478390+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:13.478562+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:14.478690+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:15.478854+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:16.479028+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:17.479168+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:18.479352+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:19.479478+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:20.479624+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:21.479799+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:22.480018+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:23.480177+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:24.480348+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:25.480459+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:26.480569+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:27.480695+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:28.480861+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:29.481035+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:30.481137+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:31.481252+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:32.481389+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:33.481538+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:34.481644+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:35.481747+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:36.481886+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:37.482090+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:38.482298+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:39.482454+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:40.482599+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:41.482775+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:42.482949+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:43.483065+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 22601728 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:44.483217+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 22585344 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:45.483363+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 22585344 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:46.483545+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 22585344 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:47.483679+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 22585344 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:48.483834+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 22585344 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:49.483965+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 22585344 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:50.484151+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 22585344 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:51.484325+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 22585344 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:52.484484+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 22585344 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:53.484622+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 22585344 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:54.484801+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 22585344 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:55.484954+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 22585344 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:56.485103+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 22585344 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:57.485259+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 22585344 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:58.485448+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 22585344 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:58:59.485597+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 22585344 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:00.485760+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 22585344 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:01.485948+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 22585344 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:02.486072+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 22585344 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:03.486196+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 22585344 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:04.486314+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 22568960 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:05.486480+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 22552576 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:06.486635+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 22552576 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:07.486753+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 22552576 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:08.486885+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 22552576 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:09.487089+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 22552576 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:10.487231+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 22552576 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:11.487346+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 22552576 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:12.487468+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 22552576 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:13.488955+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 22552576 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:14.489089+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 22552576 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:15.489207+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 22552576 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:16.489317+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 22552576 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 18:59:52 compute-0 ceph-osd[88544]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 18:59:52 compute-0 ceph-osd[88544]: bluestore.MempoolThread(0x55ab252ddb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085724 data_alloc: 218103808 data_used: 323584
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:17.489450+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 22552576 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:18.489607+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 22552576 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: do_command 'config diff' '{prefix=config diff}'
Nov 24 18:59:52 compute-0 ceph-osd[88544]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 24 18:59:52 compute-0 ceph-osd[88544]: do_command 'config show' '{prefix=config show}'
Nov 24 18:59:52 compute-0 ceph-osd[88544]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:19.489753+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: do_command 'counter dump' '{prefix=counter dump}'
Nov 24 18:59:52 compute-0 ceph-osd[88544]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 22577152 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: do_command 'counter schema' '{prefix=counter schema}'
Nov 24 18:59:52 compute-0 ceph-osd[88544]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:20.490157+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 22077440 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0x9d1e9a/0xabf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x2fdf9c7), peers [1,2] op hist [])
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: tick
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_tickets
Nov 24 18:59:52 compute-0 ceph-osd[88544]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T18:59:21.490264+0000)
Nov 24 18:59:52 compute-0 ceph-osd[88544]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 22372352 heap: 99999744 old mem: 2845415832 new mem: 2845415832
Nov 24 18:59:52 compute-0 ceph-osd[88544]: do_command 'log dump' '{prefix=log dump}'
Nov 24 18:59:52 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1360: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:52 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15201 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:52 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Nov 24 18:59:52 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4031387502' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 24 18:59:53 compute-0 ceph-mon[74927]: from='client.15191 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 18:59:53 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/4031387502' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 24 18:59:53 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Nov 24 18:59:53 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3038136324' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 24 18:59:53 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:59:53 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Nov 24 18:59:53 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3209626049' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 24 18:59:53 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Nov 24 18:59:53 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2613842162' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 24 18:59:54 compute-0 ceph-mon[74927]: pgmap v1360: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:54 compute-0 ceph-mon[74927]: from='client.15201 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:54 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3038136324' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 24 18:59:54 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3209626049' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 24 18:59:54 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2613842162' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 24 18:59:54 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1361: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:54 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15211 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:54 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Nov 24 18:59:54 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3588053764' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 24 18:59:54 compute-0 systemd[1]: Starting Hostname Service...
Nov 24 18:59:54 compute-0 systemd[1]: Started Hostname Service.
Nov 24 18:59:55 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3588053764' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 24 18:59:55 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Nov 24 18:59:55 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2550684010' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 24 18:59:55 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15217 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:55 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Nov 24 18:59:55 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4261587935' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 24 18:59:56 compute-0 ceph-mon[74927]: pgmap v1361: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:56 compute-0 ceph-mon[74927]: from='client.15211 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:56 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/2550684010' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 24 18:59:56 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/4261587935' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 24 18:59:56 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1362: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:56 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15221 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:56 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15223 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:56 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Nov 24 18:59:56 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3689138692' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 24 18:59:57 compute-0 ceph-mon[74927]: from='client.15217 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:57 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3689138692' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 24 18:59:57 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Nov 24 18:59:57 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/750453072' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 24 18:59:57 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15229 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:57 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15231 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:57 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:59:57 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 18:59:57 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:59:57 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 18:59:57 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:59:57 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 18:59:57 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:59:57 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:59:57 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:59:57 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 18:59:57 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:59:57 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 18:59:57 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:59:57 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:59:57 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:59:57 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 18:59:57 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:59:57 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 18:59:57 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:59:57 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 18:59:57 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 18:59:57 compute-0 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 18:59:58 compute-0 ceph-mon[74927]: pgmap v1362: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:58 compute-0 ceph-mon[74927]: from='client.15221 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:58 compute-0 ceph-mon[74927]: from='client.15223 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:58 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/750453072' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 24 18:59:58 compute-0 ceph-mon[74927]: from='client.15229 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:58 compute-0 ceph-mon[74927]: from='client.15231 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:58 compute-0 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1363: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:58 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Nov 24 18:59:58 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3841524255' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 24 18:59:58 compute-0 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 18:59:58 compute-0 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Nov 24 18:59:58 compute-0 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/46222097' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 24 18:59:58 compute-0 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15237 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 18:59:59 compute-0 ceph-mon[74927]: pgmap v1363: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 18:59:59 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/3841524255' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 24 18:59:59 compute-0 ceph-mon[74927]: from='client.? 192.168.122.100:0/46222097' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 24 18:59:59 compute-0 ceph-mon[74927]: from='client.15237 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
